IPsec Offload: Meet Microsoft* DirectAccess*

Manageability, security, and performance are always hot topics in the computing world. At times the focus shifts between them as needs and technologies change, but these areas have remained key vectors of enterprise computing for a long time. However, in many cases these usability vectors conflict with each other. IT managers’ desire for security and manageability may lead to extra applications and process hoops for end users, which can decrease performance. Increasing the ability to remotely and seamlessly manage a PC almost always adds security headaches that must be dealt with. Enterprise IT design is always about finding the right tradeoffs and improving the process over time.

One technology that has been around for quite a while to help improve security is IPsec (IP Security). IPsec is a set of protocols for securing and authenticating IP packets by encrypting their contents in an end-to-end manner. Most people are familiar with IPsec as the underlying technology for facilitating Virtual Private Network (VPN) connections from outside an organization’s LAN to inside the network; IPsec secures the Internet-to-intranet tunnel in this case.

While using IPsec to create a VPN connection provides secure, outside-in access to the corporate network, it requires additional configuration by the end user, is not seamless for either user or administrator, and is generally provided by an additional application running on the system. Setting up the connection can be a bit of a pain because the user has to key in an access code or password. This has a few downsides from a manageability perspective. First, security is compromised because the VPN tunnel only secures the network pipe once it is established; nothing stops end users from browsing the web on their work computers, or otherwise exposing them to a virus, before connecting to the corporate network in a secured way, so infections can transfer from an insecure network to the corporate network. Second, enterprise systems outside the corporate network are not manageable until the user manually connects to the VPN gateway.

Enter Microsoft* DirectAccess*. In Windows* Server 2008 R2 for servers and Windows* 7 for clients, Microsoft* will support a seamless IPsec layer called DirectAccess*. It integrates the encryption and authentication of IPsec directly into the operating system, so each application can have its own secure IPsec tunnel and the end user connects securely, both outside and inside the corporate network, to the systems and applications they need. Because this is integrated into the OS, the setup of the security and connection details is more seamless for both the IT department and the end user. Initial configuration is obviously required, and each IT organization must set up the security policies to its own specifications, but once that is done the system is up and running. Until recently, using IPsec internally has not been much of a focus, but recent estimates suggest 80% of successful attacks come from internal threats, so encrypting and authenticating internal data is now a priority for IT administrators. DirectAccess* allows for this new seamless security model.

Now this all sounds well and good… but what’s the catch? A key point to note is that IPsec is a highly CPU-intensive technology. Encrypting and decrypting IP packets in real time can easily swamp a CPU core when pushing much more than a few hundred megabits of network data. For a typical end-user system, a few megabits of data across a few IPsec connections will likely not cause much heartache, but for network servers hosting potentially thousands of simultaneous IPsec connections while trying to drive multiple gigabits of I/O, the performance results will be much more… uhh, what’s a nice way to say ‘unimpressive’?

To solve this issue, Intel networking products offload the computationally expensive encryption engine (AES-128) onto the LAN controller, while IPsec configuration, management, and policy creation all remain in the OS to keep usability simple. Intel offers dual-port 1 and 10 Gigabit networking solutions that support not only solid performance on standard networking workloads and advanced virtualization features, but also the ability to offload IPsec in hardware to improve system performance under large IPsec I/O workloads. Intel® Ethernet can deliver this support in adapter or down-on-motherboard form factors while supporting a wide range of enterprise-class performance and virtualization features.

For companies looking to enable IPsec in their network environment using DirectAccess*, there is the potential to improve security, reduce complexity, and enhance manageability of end clients. They just need to remember that to make this all work seamlessly on the server side without choking off processing performance, offloading the IPsec workloads to I/O hardware will be a requirement.
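To make the scale of the problem concrete, here is a rough back-of-envelope calculation. The cycles-per-byte figure is an assumed, illustrative cost for software AES-128 plus per-packet overhead, not a measured Intel number:

```python
# Back-of-envelope estimate of how much of one CPU core software IPsec
# encryption consumes at a given line rate. CYCLES_PER_BYTE is an
# illustrative assumption, not a benchmark result.

CPU_HZ = 3.0e9          # one 3 GHz core (assumed)
CYCLES_PER_BYTE = 20    # assumed cost of software AES-128 + packet handling

def core_utilization(link_mbps):
    """Fraction of one core spent on encryption at a given line rate."""
    bytes_per_sec = link_mbps * 1e6 / 8
    return bytes_per_sec * CYCLES_PER_BYTE / CPU_HZ

for mbps in (10, 100, 1000, 10000):
    print(f"{mbps:>6} Mb/s -> {core_utilization(mbps):6.1%} of one core")
```

Under these assumptions a single core is already near saturation around 1 Gb/s, and a 10 Gigabit link would need several cores doing nothing but crypto, which is why hardware offload matters for multi-gigabit IPsec servers.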
So is this a way to improve security and manageability without impacting performance? It seems that way to me.

Ben Hacker

For more information on DirectAccess*: http://www.microsoft.com/servers/directaccess.mspx

Embracing Social Computing….can it reduce security risks?

Back in March, I posted a blog on the first of five very short (~1 minute) vblogs with our CISO, Malcolm Harkins. Malcolm has some very distinct ideas on some hot industry topics, like cloud computing, IT consumerization, social computing, etc. The first vblog gave Malcolm’s perspective on security and cloud computing; the second one, embedded below, gives Malcolm’s unique point of view on security and social computing. In this vblog, Malcolm suggests we embrace social media and social computing to reduce risk.

Next week I’ll post how Intel IT is embarking on a radical five-year redesign of Intel’s information security architecture, so keep comin’ back!

#IDF13 Opens With Small is Big Message

If there was one theme that came through big at #IDF13 in the Brian Krzanich (BK) and Renee James keynote this morning, it was that size matters. The smaller the device, the more personal and scalable the use case, the bigger the data challenge, and the bigger the opportunity to change the world.

BK opened with the dramatic innovation happening at Intel, emphasizing the importance of SoC (System on a Chip) integration and demonstrating how Moore’s Law is enabling this continued reduction in chip size. He showcased the latest 22nm and 14nm products powering everything from powerful data center infrastructure, to 60 new 2 in 1 personal computing devices, to Atom based phones, to a new architecture called Quark that is 1/5th the size and 1/10th the power of Atom to enable the Internet of Things.

The message was about the power and opportunity of integration: from technology integration to delivering integrated computing experiences. As Intel moves to an SoC-focused approach to innovation, the core CPU takes up less and less of the overall silicon footprint on today’s chips as new capabilities are integrated.

This integration is enabling the delivery of advanced system capability to smaller devices located virtually everywhere. Renee showed how we can take full advantage of this approach, describing an integrated computing solution in healthcare for wearable technology, where a current bedside health monitoring system is being replaced by a wristband, and in the future by implanted technology, so that patient care can move out of the hospital, reducing cost and improving quality of care.

Renee also discussed the ability to make automobile headlights smart for safer driving, building smart cities, and capturing enough information to cost-efficiently sequence human genomes to address cancer treatment more effectively.

It was a motivational kickoff to what has already been an exciting conference. Follow coverage live on Twitter with #IDF13. Tomorrow I will recap some of the insights I’m gaining for mobility and business.

Chris
@chris_p_intel

Blueprint: SDN’s Impact on Data Center Power/Cooling Costs

The growing interest in software-defined networking (SDN) is understandable. Compared to traditional static networking approaches, the inherent flexibility of SDN complements highly virtualized systems and environments that can expand or contract in an efficient, business-oriented way. That said, flexibility is not the main driver behind SDN adoption. Early adopters and industry watchers cite cost as a primary motivation.

SDN certainly offers great potential for simplifying network configuration and management, and raising the overall level of automation. However, SDN will also introduce profound changes to the data center. Reconfiguring networks on the fly introduces fluid conditions within the data center. How will these more dynamic infrastructures impact critical data center resources – power and cooling?

How Infrastructure Impedes the Data Center

In the past, 20 to 40 percent of data center resources were typically idle at any given time, yet still drawing power and dissipating heat. As energy costs have risen over the years, data centers have had to pay more attention to this waste and look for ways to keep utility bills within budget. For example, many data centers have bumped up the thermostat to save on cooling costs.

These types of easy fixes, however, quickly fall short in the data centers associated with highly dynamic infrastructures. As network configurations change, so do the workloads on the servers, and network optimization must therefore take the data center impact into consideration.

Modern energy management solutions equip data center managers to solve this problem. They make it possible to see the big picture for energy use in the data center, even in environments that are continuously changing. Holistic in nature, the best-in-class solutions automate the real-time gathering of power levels throughout the data center as well as server inlet temperatures, for fine-grained visibility of both energy and temperature.
This information is provided by today’s data center equipment, and the energy management solutions make it possible to turn this information into cost-effective management practices.

Finding the Right Solution for Your Data Center

Energy management solutions can give IT intuitive, graphical views of both real-time and historical data. The visual maps make it easy to identify and understand the thermal zones and energy usage patterns for a row or group of racks within one or multiple data center sites. Collecting and analyzing this information makes it possible to evolve very proactive practices for data center and infrastructure management. For example, hot spots can be identified early, before they damage equipment or disrupt services. Logged data can be used to optimize rack configurations and server provisioning in response to network changes or for capacity planning.

Some of the same solutions that automate monitoring can also introduce control features. Server power capping can be introduced to ensure that any workload shifts do not result in harmful power spikes. Power thresholds make it possible to identify and adjust conditions to extend the life of the infrastructure. To control server performance and quality of service, advanced energy management solutions also make it possible to balance power against server processor operating frequencies. The combination of power capping and frequency adjustments gives data center managers the ability to intelligently control and automate the allocation of server assets within a dynamic environment.

Early deployments are validating the potential for SDN, but data center managers should take time to consider the indirect and direct impacts of this or any disruptive technology so that expectations can be set accordingly.
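The power-capping idea described above can be sketched in a few lines. This is a hypothetical illustration of the kind of logic an energy-management tool might apply when workloads shift; the budget, cap limits, and function names are assumptions, not any specific product’s API or numbers:

```python
# Hypothetical power-capping sketch: split a shared power budget across
# nodes in proportion to measured draw, clamped to per-node limits.
# All figures below are illustrative assumptions.

GROUP_BUDGET_W = 1000            # power budget shared by a group of nodes (assumed)
MIN_CAP_W, MAX_CAP_W = 150, 400  # allowed per-node cap range (assumed)

def rebalance(node_draws_w, budget_w=GROUP_BUDGET_W):
    """Return per-node power caps proportional to demand, within limits."""
    total = sum(node_draws_w)
    caps = []
    for draw in node_draws_w:
        share = budget_w * draw / total if total else budget_w / len(node_draws_w)
        caps.append(max(MIN_CAP_W, min(MAX_CAP_W, share)))
    return caps

# One node spiking while three run light: the hot node is clamped to the
# per-node maximum and the rest of the budget follows demand.
print(rebalance([380, 120, 140, 160]))   # -> [400, 150.0, 175.0, 200.0]
```

A real controller would feed measured draws from platform telemetry into a loop like this and push the resulting caps back to the nodes, preventing the harmful power spikes mentioned above.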
SDN is just one trend that puts more pressure on IT to do more with less.

The Urgency of Optimization

Management expects to see costs go down; users expect 100% uptime for the services they need to do their jobs. More than ever, IT needs the right tools to oversee the resources they are being asked to deploy and configure more rapidly. They need to know the impact of any change on resource allocations within the data center.

IT teams planning for SDN must also consider the increasing regulations and availability restrictions relating to energy in various locations and regions. Some utility companies are already unable to meet the service levels required by some data centers, regardless of price. Over-provisioning can no longer be considered a practical safety net for new deployments.

Regular evaluations of the energy situation in the data center should be a standard practice for technology planning. Holistic energy management solutions give data center managers many affordable tools for those efforts. Today’s challenge is to accurately assess technology trends before any pilot testing begins, and to leverage an energy management solution that can minimize the pain points of any new technology project such as SDN.

This article originally appeared on Converge Digests on Monday, October 13, 2014.

An HPC Breakthrough with Argonne National Laboratory, Intel, and Cray

At a press event on April 9, representatives from the U.S. Department of Energy announced they awarded Intel contracts for two supercomputers totaling just over $200 million as part of its CORAL program. Theta, an early production system, will be delivered in 2016 and will scale to 8.5 petaFLOPS and more than 2,500 nodes, while the 180 petaFLOPS, greater than 50,000 node system called Aurora will be delivered in 2018. This represents a strong collaboration for Argonne National Laboratory, prime contractor Intel, and sub-contractor Cray on a highly scalable and integrated system that will accelerate scientific and engineering breakthroughs.

Rendering of Aurora

Dave Patterson (President of Intel Federal LLC and VP of the Data Center Group) led the Intel team on the ground in Chicago; he was joined on stage by Peter Littlewood (Director of Argonne National Laboratory), Lynn Orr (Undersecretary for Science and Energy, U.S. Department of Energy), and Barry Bolding (Vice President of Marketing and Business Development for Cray). Also joining the press conference were Dan Lipinski (U.S. Representative, Illinois District 3), Bill Foster (U.S. Representative, Illinois District 11), and Randy Hultgren (U.S. Representative, Illinois District 14).

Dave Patterson at the Aurora Announcement (Photo Courtesy of Argonne National Laboratory)

This cavalcade of company representatives disclosed details on the Aurora 180 petaFLOPS/50,000 node/13 megawatt system. It utilizes much of the Intel product portfolio via Intel’s HPC scalable system framework, including future Intel Xeon Phi processors (codenamed Knights Hill), second-generation Intel Omni-Path Fabric, and a new memory hierarchy composed of Intel Lustre, burst buffer storage, and persistent memory through high-bandwidth on-package memory. The system will be built using Cray’s next generation Shasta platform.

Peter Littlewood kicked off the press conference by welcoming everyone and discussing Argonne National Laboratory – the Midwest’s largest federally funded R&D center, fostering discoveries in energy, transportation, protecting the nation, and more. He handed off to Lynn Orr, who announced the $200 million contract and the Aurora and Theta supercomputers. He discussed some of the architectural details of Aurora and talked about the need for the U.S. to dedicate funds to build supercomputers to reach the next exascale echelon and how that will fuel scientific discovery, a theme echoed by many of the speakers to come.

Dave Patterson took the stage to give background on Intel Federal, a wholly owned subsidiary of Intel Corporation. In this instance, Intel Federal conducted the contract negotiations for CORAL. Dave touched on the robust collaboration with Argonne and Cray needed to bring Aurora online in 2018, as well as introducing Intel’s HPC scalable system framework – a flexible blueprint for developing high performance, balanced, power-efficient and reliable systems capable of supporting both compute- and data-intensive workloads.

Next up, Barry Bolding from Cray talked about the platform underpinning Aurora – the next generation Shasta platform. He mentioned that when deployed, Aurora has the potential to be one of the largest and most productive supercomputers in the world.

And finally, Dan Lipinski, Bill Foster, and Randy Hultgren, all representing Illinois (Argonne’s home base) in the U.S. House of Representatives, each gave a few short remarks. They echoed Lynn Orr’s previous thoughts that the United States needs to stay committed to building cutting-edge supercomputers to stay competitive in a global environment and tackle the next wave of scientific discoveries. Representative Hultgren expressed it very succinctly: “[The U.S.] needs big machines that can handle big jobs.”

Dan Lipinski (Photo Courtesy of Argonne National Laboratory)
Bill Foster (Photo Courtesy of Argonne National Laboratory)
Randy Hultgren (Photo Courtesy of Argonne National Laboratory)

After the press conference, Mark Seager (Intel Fellow, CTO of the Tech Computing Ecosystem) contributed: “We are defining the next era of supercomputing.” Al Gara (Intel Fellow, Chief Architect of Exascale Systems) took it a step further: “Intel is not only driving the architecture of the system, but also the new technologies that have emerged (or will be needed) to enable that architecture. We have the expertise to drive silicon, memory, fabric and other technologies forward and bring them together in an advanced system.”

The Intel and Cray teams prepping for the Aurora announcement

Aurora’s disruptive technologies are designed to work together to deliver breakthroughs in performance, energy efficiency, overall system throughput and latency, and cost to power. This signals the convergence of traditional supercomputing with the world of big data and analytics, driving impact not only for the HPC industry but also for more traditional enterprises.

Argonne scientists – who have a deep understanding of how to create software applications that maximize available computing resources – will use Aurora to accelerate discoveries surrounding:

Materials science: Design of new classes of materials that will lead to more powerful, efficient and durable batteries and solar panels.
Biological science: Gaining the ability to understand the capabilities and vulnerabilities of new organisms that can result in improved biofuels and more effective disease control.
Transportation efficiency: Collaborating with industry to improve transportation systems to design enhanced aerodynamics features, as well as enable production of better, more highly-efficient and quieter engines.
Renewable energy: Wind turbine design and placement to greatly improve efficiency and reduce noise.
Alternative programming models: Partitioned Global Address Space (PGAS) as a basis for Coarray Fortran and other unified address space programming models.

The Argonne Training Program on Extreme-Scale Computing will be a key program for training the next generation of code developers – having them ready to drive science from day one when Aurora is made available to research institutions around the world.

For more information on the announcement, you can head to our new Aurora webpage or dig deeper into Intel’s HPC scalable system framework.

© 2015, Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

Why Purpose Matters In Mobile Analytics Design

A rise in the use of mobile devices and applications has heightened the demand for organizations to elevate their plans to deliver mobile analytics solutions. However, designing mobile analytics solutions without understanding your audience and purpose can backfire.

I frequently discover that in mobile analytics projects, understanding the purpose is where we take things for granted and fall short—not because we don’t have the right resources to understand it better, but because we tend to form the wrong assumptions. A better understanding of the “mobile purpose” is critical for success, and we need to go beyond just accepting the initial request at the onset of our engagements.

The Merriam-Webster dictionary defines purpose as “the reason why something is done or used: the aim or intention of something.” Although the reasons for a mobile analytics project may appear obvious on the surface, a re-evaluation of the initial assumptions can often prove invaluable both for the design and the longevity of mobile projects.

Here are a few points to keep in mind before you schedule your first meeting or lay down a single line of code.

Confirm the link to strategy

I often talk about the importance of executive sponsorship. There’s no better person than the executive sponsor to provide guidance and validation. When it comes to technology projects (and mobile analytics is no different), our engagements need to be linked directly to our strategy. We must make sure that everything we do contributes to our overall business goal.

Consider the relevance

Is it relevant? It’s a simple question, yet we have a tendency to take it for granted and overlook its significance. It doesn’t matter whether we’re designing a strategy for mobile analytics or a simple mobile report—relevance matters. Moreover, it isn’t enough just to study its current application. We need to ask: Will it be relevant by the time we deliver? Even with rapid deployment solutions and the use of agile project methodologies, there’s a risk that certain requirements may become irrelevant if the current business processes that mobile analytics depends on change, or if your mobile analytics solution highlights gaps that require a redesign of your business processes. In the end, what we do must be relevant both now and when we go live.

Understand the context

Understanding the context is crucial, because everything we do and design will be interpreted according to the context in which the mobile analytics project is managed or the mobile solutions are delivered. When we talk about context in mobile analytics, we mustn’t think only about the data consumed on the mobile device, but also about how that data is consumed and why it was required in the first place. We’re also interested in going beyond the what to further examine the why and how. Why is this data or report relevant? How can I make it more relevant? Finding these answers requires that you get closer to current or potential customers (mobile users) by involving them actively in the process from day one. You need to closely observe their mobile interactions so you can validate your assumptions about the use cases and effectively identify gaps where they exist.

Bottom line: Focus on the business value

Ultimately, it all boils down to this: What is the business value? Is it insight into operations so we can improve productivity? Is it cost savings through early detection and preventive actions? Is it increased sales as a result of identifying new opportunities? What we design and how we design it will directly guide and influence many of these outcomes. If we have confirmed the link to strategy, considered the relevance, and understood the context, then we have all the right ingredients to effectively deliver business value. In the absence of these pieces, our value proposition won’t pass muster.

Stay tuned for my next blog in the Mobile Analytics Design series. You may also like the Mobile BI Strategy series on IT Peer Network. Connect with me on Twitter @KaanTurnali, LinkedIn and here on the IT Peer Network.

A version of this post was originally published on turnali.com and also appeared on the SAP Analytics Blog.

Cloud for All Industry Collaborations Surge at Cloud Day

Since we announced the Cloud for All initiative last summer, Intel engineering teams have been hard at work driving industry collaborations. Our goal: unleashing tens of thousands of new clouds through delivery of easy-to-deploy and easy-to-manage solutions to the marketplace.

With Cloud for All, we have assembled the largest developer team working on OpenStack. We’ve opened the OpenStack Innovation Center with Rackspace and the OpenStack Foundation, and developers are actively testing code at scale. We’ve addressed some of the key enterprise feature gaps in the OpenStack platform, with tens of thousands of lines of code submitted addressing capabilities such as high availability for tenants and services, network and storage support, and ease of deployment. We’ve launched a series of bug-smashing events with the OpenStack community and have addressed over a quarter of the known bugs to date. We’ve worked with industry leaders such as CoreOS, Mesosphere, Microsoft, Mirantis, Red Hat, SUSE, and VMware to deliver optimized SDI stacks and have integrated them with industry partners into technical reference architectures ready for customer deployment. Finally, we’ve delivered nearly 100 POCs and trials of these solutions, with successful results in TCO reduction and agility increase.

This has certainly been excellent progress, but with Cloud Day we’re unveiling a number of new collaborations to drive deeper investment in the industry SDI stack evolution. We are optimizing software with underlying Intel architecture capabilities while accelerating successful deployments in enterprise data centers. In other words, we have hit the turbo button on Cloud for All and our work with the industry to democratize cloud.

Let me start with investment in SDI solutions. Enterprise customers have been looking for opportunities to run traditional VM-based workloads alongside containerized applications, with the ability to seamlessly integrate both into their environments.
That’s why we’re so delighted by the announcement of a collaboration with CoreOS and Mirantis to integrate Kubernetes and OpenStack into a single open source SDI set of capabilities, eliminating confusion caused by solution fragmentation and giving our customers a clear path to deployment of mixed VM and container environments. Kubernetes was initially delivered by Google to encourage broad proliferation of container app development models, and CoreOS, the company that delivered the Tectonic solution to the market, makes an ideal partner for leading the Kubernetes development. Mirantis, a pure-play OpenStack vendor, brings tried-and-true expertise on the OpenStack platform to the collaboration. We will be offering all software as upstream contributions, benefiting the entire community and accelerating SDI innovation. Expect more details on this collaboration at the OpenStack Summit in Austin.

Equally critical to open source platform innovation is optimization of the SDI stack with the underlying infrastructure. We announced the creation of the Intel Cloud Builders program today with over 30 industry leaders to spur broad optimization of SDI and drive integration of these solutions into the market. The program will offer developer education opportunities, drive optimization efforts into deployable reference architectures, and integrate available solutions with end-user POCs and trials. We’ve published 20 reference architectures today with the launch, and these can be viewed at cloudbuilders.intel.com.

Examples of the type of optimization efforts driven through Intel Cloud Builders are two software collaborations announced today focused on Snap, our open source telemetry framework that provides access to data on the states and capabilities of underlying infrastructure, enabling more acute awareness and control of data center infrastructure. We announced a collaboration with raintank to integrate Snap into Grafana.
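Snap’s real plugins are separate Go binaries wired together over an RPC interface; purely as an illustration of the collect → process → publish pipeline that such a telemetry framework automates (the function names and metric set below are hypothetical stand-ins, not Snap’s actual plugin API), the flow looks roughly like this:

```python
import random

# Illustrative collect -> process -> publish loop. The names and metrics
# here are hypothetical, not Snap's real plugin interface.

def collect_metrics():
    # A real collector plugin would read platform counters (power, thermals).
    return {"power_watts": random.uniform(80, 220),
            "inlet_temp_c": random.uniform(18, 30)}

def process(sample, power_budget_w=200):
    # A processor plugin can tag or filter samples before publishing.
    sample["over_budget"] = sample["power_watts"] > power_budget_w
    return sample

def publish(sample, sink):
    # A publisher plugin would push to a dashboard or time-series store
    # (Grafana being one natural sink); here we just append to a list.
    sink.append(sample)

sink = []
for _ in range(3):                     # three collection intervals
    publish(process(collect_metrics()), sink)

print(len(sink), "samples published")  # -> 3 samples published
```

The value of the framework is that each stage is a pluggable component, so the same collected data can feed dashboards, capacity planning, and control loops without custom glue code.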
I hosted Torkel Odegaard, co-founder of raintank, onstage to share a demo of Grafana at work, and Torkel surprised the crowd with an announcement of the new Grafana 3.0 and the launch of Grafana.net, delivering this new capability to the hundreds of thousands of current Grafana users. We also announced that leading provider Iron.io will be integrating snap into its future software platforms.

At the end of the day, we collaborate with the industry to drive deployment of cloud solutions and bring the agility and efficiency of the cloud model to the mainstream. This is why our announcement with VMware is an important aspect of our strategy to drive broad cloud deployments in the enterprise this year. We announced the creation of a network of Centers of Excellence: resources for end customers to get custom configuration support, run POCs, and access reference architectures to speed deployments on premises. We expect hundreds of customers to access the COEs this year, with the first COE opening its doors today for customer engagement. Additionally, we’ve teamed up with NIST to drive the latest cybersecurity standards and compliance into the COEs, offering customers a unique opportunity to leverage the latest security technologies from both companies in their deployments.

If you’re as excited as I am about the progress we’re making with accelerated industry innovation towards unleashing tens of thousands of clouds, I encourage you to engage with us on this journey. If you’re coding open source software for SDI, engage with us in the OpenStack Innovation Center, learn more about our CoreOS and Mirantis collaboration, or get connected with us on GitHub with snap. If you’re a hardware or software company working in this space, consider membership in Intel Cloud Builders, and start talking to us about collaborations to deliver optimized solutions to the market. If you’re looking to deploy cloud solutions, check out our solutions catalog and talk to us about a POC or trial.
Cloud is the greatest disruptor of business today, and we’re looking forward to working with you to drive the democratization of cloud to deliver this disruptive power to all.

IPCC Expresses “Regret” Over Glaciers Error

Responding to growing criticism, the Intergovernmental Panel on Climate Change (IPCC) today admitted an error in a working group report that was part of its massive 2007 global review. A statement that the Himalayan glaciers were “very likely” to melt by 2035 was based on “poorly substantiated” estimates, IPCC said. This forecast, which appears to have been based on the non-peer-reviewed work of a single researcher, was challenged publicly last November by an Indian government report written by geologist Vijay Kumar Raina. In its statement, IPCC, led by Rajendra Pachauri, expressed “regret” for “poor application of well established procedures” in the drafting of the 2007 working group document.

The Latest Buzz: Orszag Budgets for Caffeine Genetic Marker

Peter Orszag, the high-energy director of the White House Office of Management and Budget, recently shared some good news with Politico. No, he hasn’t solved the country’s economic problems. But he’s learned that his genetics won’t be crimping a caffeine habit that fuels 80-hour work weeks spent trying to erase a trillion-dollar deficit. While attending a conference, Orszag learned from biologist Craig Venter that he could get screened for a genetic marker that can raise the risk of heart disease when lots of caffeine is consumed. Orszag, who drinks vast amounts of Diet Coke, went ahead with the test. Luckily for him, he’s in the clear. “If that test had come out the wrong way, you would not have wanted to be around me afterwards, because to give up caffeine would have been very painful,” Orszag told Politico reporter Mike Allen.

The Office of Management and Budget said that Orszag was traveling today and couldn’t provide additional details, including whether he’d learned anything about his genetic predisposition to other diseases. But ScienceInsider guesses that he was referring to a single nucleotide polymorphism (SNP) called rs762551 that modulates a caffeine-metabolizing enzyme in the liver. Those with a “slow” metabolizing version who drink a few cups of coffee a day are at a higher risk of heart attacks.
23andMe, the consumer gene-testing company in Mountain View, California, tests for rs762551 along with many other SNPs. And it’s not just for policy wonks. National Institutes of Health Director Francis Collins, Harvard University psychologist Steven Pinker, and DNA discoverer James Watson all have had part of their genomes decoded.

“End Homeopathy on NHS,” Say British MPs

In a report released today, the United Kingdom’s House of Commons Science and Technology Committee has decided that homeopathy is nothing more than a placebo and should not be provided by the National Health Service, as it has been since its inception in 1948. The panel also recommended that the Medicines and Healthcare products Regulatory Agency—the U.K. drug safety watchdog—should stop licensing over-the-counter homeopathic medications that have not demonstrated their effectiveness in randomized controlled trials (i.e., all of them). These verdicts aren’t a complete shock, since the House of Commons Science and Technology Committee comprises Members of Parliament who have chosen to sit on a committee established to monitor the scientific evidence base for government policy. The panel even went as far as to dismiss calls for further research, concluding that “there has been enough testing of homeopathy and plenty of evidence showing that it is not efficacious.” Furthermore, the report accuses the British Homeopathic Association (BHA), which had submitted evidence to the panel, of cherry-picking, and even, in one case, actively misrepresenting, research into the treatment (a famous study that concluded its findings were “compatible with the notion that the clinical effects of homeopathy are placebo effects” was cited by the BHA as evidence of the treatment’s efficacy.) 
The MPs conclude that “advocates of homeopathy … choose to rely on, and promulgate, selective approaches to the treatment of the evidence base.”

White House Rally to Promote Science Education

On Thursday the White House will host the third public event in an ongoing campaign to encourage the private sector to invest in precollege science and math education. The initiative, dubbed Educate to Innovate, showcases new and ongoing activities by companies, foundations, and professional organizations to train better teachers, raise student performance, and increase interest in STEM (science, technology, engineering, and mathematics) education. Last fall’s version, for example, launched National Lab Day, a Web site to link up teachers and scientists and engineers who want to volunteer in the classroom. This week’s event is expected to showcase a new report on K–12 STEM education by the President’s Council of Advisors on Science and Technology.

White House Launches New Tally of STEM Education Programs

An inventory by the Bush Administration of federal efforts to bolster science, technology, engineering, and mathematics (STEM) education was too simplistic to be useful, according to a White House official. So the Obama Administration plans to repeat the exercise—but with more analysis—as a first step toward improving STEM programs. In 2007, the Bush-era Academic Competitiveness Council (ACC) found that 12 agencies were spending a combined $3.1 billion a year on 105 programs covering education and training at all levels, from elementary school through postgraduate studies and including outreach efforts. The council’s tally was supposed to lead to a streamlining of government operations. But Carl Wieman, associate director for science within the White House Office of Science and Technology Policy, says that ACC wasn’t helpful and that a new White House panel will take another shot. “The ACC is a list of programs, basically,” Wieman tells ScienceInsider. “You need something more nuanced than simply labeling it STEM education. Because that leads people to ask, ‘You’re spending all this money, why don’t we have great STEM education?’ The reality is that [these programs] do a large variety of different things, from graduate fellowships at the Nuclear Regulatory Commission to an introduction to science for kindergarteners. 
Different parts of different agencies do things that are important to their mission.” Wieman says the new panel, under the White House’s National Science and Technology Council, will look at “what these programs do, how they fit together, and how well they match what we feel are best practices.” He says that the review will rely heavily “on evidence and on what we know about learning.”

Europe Downscales Monster Telescope to Save Money

The world’s biggest telescope is getting smaller—but more affordable. The designers of the future European Extremely Large Telescope (E-ELT) have decided to shrink the telescope’s primary mirror from a diameter of 42 meters to 39.3 meters. The resulting 13% decrease in sensitivity is likely to reduce its scientific payoff. But the 18% savings in its overall cost gives the telescope a better chance to remain on schedule for first light in 2022. The decrease in mirror diameter has not yet been officially announced. “But it’s part of the new design study that will be presented to the ESO Council,” says Tim de Zeeuw, director general of the European Southern Observatory. ESO plans to build the telescope at Cerro Armazones, a 3064-meter-high peak close to its existing Very Large Telescope, which consists of four identical 8.2-meter instruments. ESO’s governing council is expected to make a final decision on the project in December. Apart from a smaller primary mirror (which will consist of hundreds of hexagonal segments), the new E-ELT will also sport a significantly smaller secondary mirror (4.2 meters instead of 5.9 meters), and a smaller and more compact overall structure. The new dimensions will make it harder for the E-ELT to accomplish one of its primary goals, to image Earthlike planets orbiting stars other than the sun. “It hurts,” says de Zeeuw, “but we’ve been able to bring the projected costs down from €1275 to €1055 million [US $1.5 billion]. 
This will enable us to build the instrument in 10 or 11 years.” Astrophysicist Isobel Hook of the University of Oxford in the United Kingdom, who chairs the E-ELT Science Working Group, says that the downscaling “is not disastrous” but that there will be implications for the science. “The ultimate goal of imaging an exoplanet similar to our own Earth might still be feasible,” she says, “but it’s gonna be extremely difficult, and it will only be possible for nearby stars. The smaller size is disappointing from a scientific point of view, but we need to get on with it now. A further delay would also compromise the science.” ESO is in a race with two other consortia to be the first in the next generation of jumbo telescopes. 
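The percentages in the mirror-downsizing story can be checked with a quick back-of-the-envelope calculation. This is only a sketch, under the assumption that “sensitivity” here refers to light-collecting area, which scales with the square of the primary mirror’s diameter:

```python
# Check the quoted figures for the E-ELT downsizing.
# Assumption: sensitivity is taken to mean collecting area (proportional
# to diameter squared); the article does not spell out its definition.

old_diameter_m = 42.0
new_diameter_m = 39.3

# Fractional loss in collecting area from shrinking the primary mirror.
area_loss = 1 - (new_diameter_m / old_diameter_m) ** 2
print(f"Collecting-area loss: {area_loss:.1%}")  # ~12.4%, close to the quoted 13%

# Cost savings from the revised design, in millions of euros.
old_cost_meur = 1275
new_cost_meur = 1055
savings = (old_cost_meur - new_cost_meur) / old_cost_meur
print(f"Cost savings: {savings:.1%}")  # ~17.3%, close to the quoted 18%
```

Both results land within rounding distance of the article’s 13% and 18% figures, suggesting the quoted sensitivity loss is indeed a collecting-area comparison.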
A U.S.-led international collaboration is preparing the 24.5-meter Giant Magellan Telescope on Cerro Las Campanas in Chile, while an international consortium led by Californian institutions is planning the Thirty-Meter Telescope (TMT) on Mauna Kea, Hawaii. Astronomer Richard Ellis of the California Institute of Technology in Pasadena, who is a member of the Board of Directors for the TMT, says the E-ELT’s smaller mirror size will lead to a “quite significant loss” in its ability to study remote galaxies as well as directly image exoplanets. For observations that use adaptive optics—a complicated but essential technology to compensate for air turbulence—a small change in mirror diameter can have big consequences, he says. “Still, the E-ELT will of course be a huge gain over the Thirty-Meter Telescope,” says Ellis. The E-ELT received a financial boost in December when Brazil decided to become the 15th (and first non-European) member state of ESO. But even with Brazil’s entrance fee of €130 million (spread out over 10 years), de Zeeuw says that construction of the original 42-meter E-ELT would have taken at least 16 years at its previous price. The decision to go with a smaller mirror, he says, represents a “tremendous opportunity” for the E-ELT to see first light before the TMT. Ellis says the TMT could be completed in 2018 if commitments for the entire price tag of just over $1 billion are secured by next year. So far, however, only $300 million plus funds for the completion of a detailed design study have been raised, he says. The TMT consortium is led by the California Institute of Technology, the University of California, and the Association of Canadian Universities for Research in Astronomy. Collaborating national institutes in Japan, China, and India might find it easier to raise money if the U.S. National Science Foundation decides to become a partner in the TMT, says Ellis, at a proposed 20% share. 
*This item has been updated to add information about the Giant Magellan Telescope.