Technology Development – Turning Seawater into Jet Fuel

The process converts carbon dioxide and hydrogen extracted from seawater into hydrocarbons that can then be used to produce JP-5 fuel stock, and the technology promises economically viable, widespread applicability.

This article was first published in New Energy & Fuel on September 25, 2012.

Scientists at the U.S. Naval Research Laboratory (NRL) are developing a process to extract carbon dioxide (CO2) and produce hydrogen gas (H2) from seawater.  Then they catalytically convert the CO2 and H2 into jet fuel by a gas-to-liquids process.

The NRL effort has successfully developed and demonstrated technologies for the recovery of the CO2 and the production of the H2 from seawater using an electrochemical acidification cell, and the conversion of the CO2 and H2 to hydrocarbons that can be used to produce jet fuel.

Electrochemical Acidification Carbon Capture Skid.

NRL research chemist Dr. Heather Willauer said, “The potential payoff is the ability to produce JP-5 jet fuel stock at sea reducing the logistics tail on fuel delivery with no environmental burden and increasing the Navy’s energy security and independence.”  JP-5 is very close chemically to kerosene and diesel.

Willauer continues, “The reduction and hydrogenation of CO2 to form hydrocarbons is accomplished using a catalyst that is similar to those used for Fischer-Tropsch reduction and hydrogenation of carbon monoxide. By modifying the surface composition of iron catalysts in fixed-bed reactors, NRL has successfully improved CO2 conversion efficiencies up to 60%.”

Technically, NRL has developed a two-step laboratory process to convert the CO2 and H2 gathered from the seawater into liquid hydrocarbons. In the first step, an iron-based catalyst achieves CO2 conversion levels of up to 60% and cuts unwanted methane production from 97% to 25% in favor of longer-chain unsaturated hydrocarbons (olefins). In the second step, a nickel-supported catalyst oligomerizes the olefins (oligomerization is a chemical process that joins monomers, molecules of low molecular weight, into compounds of higher molecular weight by a finite degree of polymerization) into a liquid containing hydrocarbon molecules in the C9-C16 range, suitable for conversion to jet fuel.
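
As a rough sketch of the chemistry (illustrative stoichiometry consistent with the description above, not NRL's published reaction scheme), the two steps can be written as:

    n\,CO_2 + 3n\,H_2 \;\rightarrow\; C_nH_{2n} + 2n\,H_2O \qquad \text{(step 1: Fe-catalyzed hydrogenation of CO2 to olefins)}
    3\,C_4H_8 \;\rightarrow\; C_{12}H_{24} \qquad \text{(step 2: oligomerization over the Ni catalyst into the C9-C16 range)}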

The raw materials are abundant.  CO2 is an abundant carbon source in seawater, with a concentration about 140 times greater than that in air. Two to three percent of the CO2 in seawater is dissolved CO2 gas in the form of carbonic acid, one percent is carbonate, and the remaining 96 to 97% is bound in bicarbonate. Processes that exploit this higher weight-per-volume concentration of CO2 in seawater, coupled with more efficient catalysts for the heterogeneous catalysis of CO2 and H2, could make a sea-based synthetic fuel process viable.
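
A back-of-the-envelope check of that 140:1 figure, using approximate values that are not from the article (total dissolved inorganic carbon in seawater of roughly 100 mg of CO2 per liter, and air at roughly 400 ppm CO2, or about 0.7 mg per liter):

    \frac{\sim 100\ \text{mg CO}_2/\text{L (seawater)}}{\sim 0.7\ \text{mg CO}_2/\text{L (air)}} \;\approx\; 140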

The NRL effort made significant advances developing carbon capture technologies in the laboratory. In the summer of 2009 a standard commercially available chlorine dioxide cell and an electro-deionization cell were modified to function as electrochemical acidification cells. Using the novel modified cells both dissolved and bound CO2 were recovered from seawater by re-equilibrating carbonate and bicarbonate to CO2 gas at a seawater pH below 6. In addition to CO2, the cells produced H2 at the cathode as a by-product.

Note that the oceans offer a huge reserve of raw materials for fuel production.

The completed studies of 2009 assessed the effects of the acidification cell configuration, seawater composition, flow rate, and current on seawater pH levels. The data were used to determine the feasibility of this approach for efficiently extracting large quantities of CO2 from seawater. From these feasibility studies NRL successfully scaled up and integrated the carbon capture technology into an independent skid, a “lab on a pallet” so to speak, called a “carbon capture skid,” to process larger volumes of seawater and evaluate the overall system design and efficiencies.

The carbon capture skid’s major component is a three-chambered electrochemical acidification cell. The cell uses small quantities of electricity to exchange hydrogen ions produced at the anode with sodium ions in the seawater stream. As a result, the seawater is acidified. At the cathode, water is reduced to H2 gas and sodium hydroxide (NaOH) is formed. This basic solution may be re-combined with the acidified seawater to return the seawater to its original pH with no additional chemicals. Current and continuing research using the carbon capture skid demonstrates the continuous efficient production of H2 and the recovery of up to 92% of the CO2 from seawater.
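
The electrode chemistry implied by that description is essentially standard water electrolysis, sketched here for illustration (not NRL's published cell reactions): the hydrogen ions produced at the anode displace sodium ions and acidify the seawater, the hydroxide produced at the cathode pairs with sodium to form NaOH, and in the acidified stream the bound bicarbonate is driven off as recoverable CO2 gas.

    \text{Anode: } 2\,H_2O \;\rightarrow\; O_2 + 4\,H^+ + 4\,e^-
    \text{Cathode: } 2\,H_2O + 2\,e^- \;\rightarrow\; H_2 + 2\,OH^-
    \text{Acidified stream: } HCO_3^- + H^+ \;\rightarrow\; H_2O + CO_2\uparrow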

The carbon capture skid has been tested using seawater from the Gulf of Mexico to simulate conditions that will be encountered in actual open ocean processing.

The NRL group is working now on process optimization and scale-up.  Initial studies predict that jet fuel from seawater would cost in the range of $3 to $6 per gallon to produce.

Willauer points out, “With such a process, the Navy could avoid the uncertainties inherent in procuring fuel from foreign sources and/or maintaining long supply lines.”  During the government’s fiscal year 2011, the U.S. Navy’s Military Sealift Command, the primary supplier of fuel and oil to the U.S. Navy fleet, delivered nearly 600 million gallons of fuel to Navy vessels underway, operating 15 fleet replenishment ships around the globe.

Resupplying the fleet at sea, while ships are underway, is a costly endeavor in terms of logistics, time, and fiscal constraints, and it exposes national security and the sailors at sea to added threats.

It’s a brilliantly insightful use of the environment.  Moreover, the technology would help draw down excess CO2 in seawater, which amounts to recycling the carbon that past fossil fuel use has added to the environment.

Entrepreneurs are going to realize that the Navy’s work could be an industrial boon to fuel production as well as shorten the carbon cycle.  While the Navy projects a production cost of $3 to $6 per gallon, the private sector would very likely drive that cost down much further.

It’s not hard to imagine that in a few years most of the oil business might simply be at sea, harvesting CO2 and H2 and making petroleum products by recycling the CO2 emitted by past fossil fuel use.

Many may complain that the military is wasteful or poor policy, or hold other notions that fly in the face of human nature.  But over the past few decades the all-volunteer U.S. military has made significant contributions, and now it may solve what has been thought to be an intractable problem.

Fast Data

Fast Data hits the Big Data Fast Lane

Courtesy of Tony Baer’s OnStrategies blog.

 

Fast Data, used in large enterprises for highly specialized needs, has become more affordable and available to the mainstream. Just when corporations absolutely need it.

Of the 3 “V’s” of Big Data – volume, variety, velocity (we’d add “Value” as the 4th V) – velocity has been the unsung ‘V.’ With the spotlight on Hadoop, the popular image of Big Data is large petabyte data stores of unstructured data (which are the first two V’s). While Big Data has been thought of as large stores of data at rest, it can also be about data in motion.

“Fast Data” refers to processes that require lower latencies than would otherwise be possible with optimized disk-based storage. Fast Data is not a single technology, but a spectrum of approaches that process data that might or might not be stored. It could encompass event processing, in-memory databases, or hybrid data stores that optimize cache with disk.
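
As a toy illustration of the event-processing end of that spectrum (hypothetical class names and thresholds, not any particular vendor's API), a processor that keeps a short sliding window entirely in memory and reacts as events arrive might look like this in Python:

    from collections import deque

    class PriceSpikeDetector:
        """Minimal sliding-window event processor: nothing is persisted to disk."""
        def __init__(self, window_seconds=5.0, threshold=0.02):
            self.window_seconds = window_seconds   # how much history to keep in memory
            self.threshold = threshold             # fractional move that counts as a spike
            self.window = deque()                  # (timestamp, price) pairs

        def on_event(self, timestamp, price):
            # Age out events that have fallen outside the window.
            while self.window and timestamp - self.window[0][0] > self.window_seconds:
                self.window.popleft()
            self.window.append((timestamp, price))
            oldest_price = self.window[0][1]
            # React immediately if the move inside the window exceeds the threshold.
            if abs(price - oldest_price) / oldest_price >= self.threshold:
                return "ALERT: %.2f moved more than %.0f%% within %.1fs" % (
                    price, self.threshold * 100, self.window_seconds)
            return None

    detector = PriceSpikeDetector()
    for t, p in [(0.0, 100.0), (1.0, 100.5), (2.5, 103.1)]:   # a tiny synthetic feed
        alert = detector.on_event(t, p)
        if alert:
            print(alert)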

Fast Data is nothing new, but because of the cost of memory, was traditionally restricted to a handful of extremely high-value use cases. For instance:

  • Wall Street firms routinely analyze live market feeds, and in many cases, run sophisticated complex event processing (CEP) programs on event streams (often in real time) to make operational decisions.
  • Telcos have handled such data in optimizing network operations while leading logistics firms have used CEP to optimize their transport networks.
  • In-memory databases, used as a faster alternative to disk, have similarly been around for well over a decade, having been employed for program stock trading, telecommunications equipment, airline schedulers, and large destination online retail (e.g., Amazon).

Hybrid in-memory and disk systems have also become commonplace, especially amongst data warehousing systems (e.g., Teradata, Kognitio), and more recently among the emergent class of advanced SQL analytic platforms (e.g., Greenplum, Teradata Aster, IBM Netezza, HP Vertica, ParAccel) that employ smart caching in conjunction with a number of other bells and whistles to juice SQL performance and scaling (e.g., flatter indexes, extensive use of various data compression schemes, columnar table structures, etc.). Many of these systems are in turn packaged as appliances that come with specially tuned, high-performance backplanes and direct-attached disk.

Finally, caching is hardly unknown to the database world. Hot spots of data that are frequently accessed are often placed in cache, as are snapshots of database configurations that are often stored to support restore processes, and so on.

So what’s changed?

The usual factors: the same data explosion that created the urgency for Big Data is also generating demand for making the data instantly actionable. Bandwidth, commodity hardware, and of course, declining memory prices, are further forcing the issue: Fast Data is no longer limited to specialized, premium use cases for enterprises with infinite budgets.

Not surprisingly, pure in-memory databases are now going mainstream: Oracle and SAP are choosing in-memory as one of the next places where they are establishing competitive stakes: SAP HANA vs. Oracle Exalytics. Both Oracle and SAP for now are targeting analytic processing, including OLAP (raising the size limits on OLAP cubes) and more complex, multi-stage analytic problems that traditionally would have required batch runs (such as multivariate pricing) or would not have been run at all (too complex, too much delay). More to the point, SAP is counting on HANA as a major pillar of its stretch goal to become the #2 database player by 2015, which means expanding HANA’s target to include next generation enterprise transactional applications with embedded analytics.

Potential use cases for Fast Data could encompass:

  • A homeland security agency monitoring the borders requires the ability to parse, decipher, and act on complex occurrences in real time to prevent suspicious people from entering the country
  • Capital markets trading firms require real-time analytics and sophisticated event processing to conduct algorithmic or high-frequency trades
  • Entities managing smart infrastructure must digest torrents of sensor data to make real-time decisions that optimize the use of transportation or public utility infrastructure
  • B2B consumer products firms monitoring social networks may require real-time response to understand sudden swings in customer sentiment

For such organizations, Fast Data is no longer a luxury, but a necessity.

More specialized use cases are similarly emerging now that the core in-memory technology is becoming more affordable. YarcData, a startup from venerable HPC player Cray, is targeting graph data, which represents data with many-to-many relationships. Graph computing is extremely process-intensive and, as such, has traditionally been run in batch when involving Internet-size data sets. YarcData adopts a classic hybrid approach that pipelines computations in memory but persists data to disk. YarcData is the tip of the iceberg – we expect to see more specialized applications that utilize hybrid caching to combine speed with scale.
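
A minimal sketch of that hybrid idea (illustrative only, with made-up paths and a deliberately naive design): keep hot keys in an in-memory LRU cache, write everything through to disk, and fall back to disk on a cache miss.

    import os
    import pickle
    from collections import OrderedDict

    class HybridStore:
        """Toy hybrid store: hot keys live in an in-memory LRU cache, all keys persist on disk."""
        def __init__(self, directory, capacity=1024):
            self.directory = directory
            self.capacity = capacity
            self.cache = OrderedDict()               # insertion order doubles as recency order
            os.makedirs(directory, exist_ok=True)

        def _path(self, key):
            return os.path.join(self.directory, "%s.pkl" % key)

        def put(self, key, value):
            with open(self._path(key), "wb") as f:   # write-through: the disk copy is authoritative
                pickle.dump(value, f)
            self._cache(key, value)

        def get(self, key):
            if key in self.cache:                    # fast path: served from memory
                self.cache.move_to_end(key)
                return self.cache[key]
            with open(self._path(key), "rb") as f:   # slow path: fetch from disk, then re-cache
                value = pickle.load(f)
            self._cache(key, value)
            return value

        def _cache(self, key, value):
            self.cache[key] = value
            self.cache.move_to_end(key)
            if len(self.cache) > self.capacity:      # evict the least recently used entry
                self.cache.popitem(last=False)

    store = HybridStore("/tmp/hybrid_demo", capacity=2)
    store.put("a", [1, 2, 3])
    print(store.get("a"))    # served from the in-memory cache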

But don’t forget, memory’s not the new disk

The movement – or tiering – of data to faster or slower media is also nothing new. What is new is that data in memory may no longer be such a transient thing, and if memory is relied upon for in situ processing of data in motion or rapid processing of data at rest, memory cannot simply be treated as the new disk. Excluding specialized forms of memory such as ROM, by nature anything that’s solid state is volatile: there goes your power… and there goes your data. Not surprisingly, in-memory systems such as HANA still replicate to disk to reduce volatility. For conventional disk data stores that increasingly leverage memory, Storage Switzerland’s George Crump makes the case that caching practices must become smarter to avoid misses (where data gets mistakenly swapped out). There are also balance-of-system considerations: memory may be fast, but is its processing speed well matched with the processor? Maybe solid state overcomes I/O issues associated with disk, but it may still be vulnerable to coupling issues if processors get bottlenecked or MapReduce jobs are not optimized.

Declining memory prices are putting Fast Data in the fast lane to the mainstream. But as the technology is only now becoming affordable, we’re still early in the learning curve of designing for it.

Brain-Machine Interface – Avatars Of The Future, A Reality

Brain-machine interface lets monkeys move two virtual arms with minds: study

Xinhua | 2013-11-7 | Global Times

 

US researchers said Wednesday that monkeys in a lab have learned to control the movement of both arms on an avatar using just their brain activity.

The findings, published in the US journal Science Translational Medicine, advanced efforts to develop bilateral movement in brain-controlled prosthetic devices for severely paralyzed patients, said researchers at Duke University in Durham, North Carolina.

To enable the monkeys to control two virtual arms, the researchers recorded nearly 500 neurons from multiple areas in both cerebral hemispheres of the animals’ brains, the largest number of neurons recorded and reported to date.

Millions of people worldwide suffer from sensory and motor deficits caused by spinal cord injuries. Researchers are working to develop tools to help restore their mobility and sense of touch by connecting their brains with assistive devices.

The brain-machine interface approach holds promise for reaching this goal. However, until now brain-machine interfaces could only control a single prosthetic limb.

“Bimanual movements in our daily activities — from typing on a keyboard to opening a can — are critically important,” senior author Miguel Nicolelis, professor of neurobiology at Duke University School of Medicine, said in a statement. “Future brain-machine interfaces aimed at restoring mobility in humans will have to incorporate multiple limbs to greatly benefit severely paralyzed patients.”

Nicolelis and his colleagues studied large-scale cortical recordings to see if they could provide sufficient signals to brain-machine interfaces to accurately control bimanual movements.
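
The Duke team's decoding algorithms are not described in this article, but the general idea of mapping a large population of firing rates to limb kinematics can be sketched with a generic linear (ridge-regression) decoder on synthetic data; the counts, weights and noise model below are invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n_bins, n_neurons = 2000, 500                           # time bins x recorded units
    true_weights = rng.normal(size=(n_neurons, 4))          # map rates to (x, y) of left and right arms

    rates = rng.poisson(lam=5.0, size=(n_bins, n_neurons)).astype(float)          # synthetic firing rates
    kinematics = rates @ true_weights + rng.normal(scale=5.0, size=(n_bins, 4))   # noisy synthetic targets

    # Fit a ridge-regression decoder on the first half and evaluate on the held-out half.
    train, test = slice(0, 1000), slice(1000, 2000)
    lam = 10.0
    X, Y = rates[train], kinematics[train]
    W = np.linalg.solve(X.T @ X + lam * np.eye(n_neurons), X.T @ Y)

    pred = rates[test] @ W
    rms_error = np.sqrt(np.mean((pred - kinematics[test]) ** 2))
    print("RMS decoding error on held-out bins: %.2f" % rms_error)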

The monkeys were trained in a virtual environment within which they viewed realistic avatar arms on a screen and were encouraged to place their virtual hands on specific targets in a bimanual motor task. The monkeys first learned to control the avatar arms using a pair of joysticks, but were able to learn to use just their brain activity to move both avatar arms without moving their own arms.

As the animals’ performance in controlling both virtual arms improved over time, the researchers observed widespread plasticity in cortical areas of their brains. These results suggested that the monkeys’ brains may incorporate the avatar arms into their internal image of their bodies, a finding recently reported by the same researchers in the journal Proceedings of the National Academy of Sciences.

The researchers also found that cortical regions showed specific patterns of neuronal electrical activity during bimanual movements that differed from the neuronal activity produced for moving each arm separately.

The study suggested that very large neuronal ensembles — not single neurons — define the underlying physiological unit of normal motor functions, the researchers said, adding that small neuronal samples of the cortex may be insufficient to control complex motor behaviors using a brain-machine interface.

“When we looked at the properties of individual neurons, or of whole populations of cortical cells, we noticed that simply summing up the neuronal activity correlated to movements of the right and left arms did not allow us to predict what the same individual neurons or neuronal populations would do when both arms were engaged together in a bimanual task,” Nicolelis said. “This finding points to an emergent brain property — a non-linear summation — for when both hands are engaged at once.”

Mutant Gene Discovery

Mutant gene discovery will help research

Xinhua | 2013-11-7 | Global Times

 

Chinese doctors have discovered and registered a new mutant gene for alpha-thalassemia, the first of its kind worldwide, an advance that enriches the gene database and will assist research into cures for genetic disease.

Li Youqiong and colleagues from the People’s Hospital of Guangxi Zhuang Autonomous Region discovered the gene mutation, designated 21.9, after a series of experiments on a carrier of the hereditary disease in 2011.

Thalassemia is a disease in which the carrier is missing, or has malfunctioning versions of, the genes responsible for making hemoglobin, the blood protein that helps carry oxygen around the body.

The hemoglobin molecule has subunits commonly referred to as alpha and beta.

The mutant gene was identified by the end of 2012, added to the GenBank database at the US-based National Center for Biotechnology Information (NCBI), and then disclosed to the public on Oct. 1, 2013, according to Li.

There is no effective cure for alpha-thalassemia, and the discovery of the new mutation will help prevention of and research into the disease while laying a theoretical basis for future gene therapy.

There are three main genetic sequence databases worldwide: the DNA Data Bank of Japan (DDBJ), the European Molecular Biology Laboratory (EMBL) and GenBank at NCBI. These three organizations exchange data on a daily basis.

7 Technology Trends for 2014

The Top 7 Technology Trends That Will Dominate 2014

Jayson DeMers

Strap yourself in, it’s going to be a wild ride. In considering the changes we’ve seen in technology over the past year, I’m bracing myself for unprecedented growth when it comes to anytime, anywhere, on-demand information and entertainment.

Based on the trends we’ve seen so far in 2013, I predict 2014 will see many fledgling technologies mature and grow beyond what we could have imagined just a few years ago.

So without further ado, here are my top 7 predictions for technology trends that will dominate 2014.

1. Consumers will come to expect Smart TV capabilities

With Smart TV shipments expected to reach 123 million in 2014 – up from about 84 million in 2012 – we are poised to see explosive growth in this industry.

In the midst of this growth, we will continue to see fierce competition between major players like Samsung, Panasonic, and LG. Prices will need to continue to drop, as more consumers crave, and even expect, the ability to use Netflix, Hulu, Amazon Instant Video and their web browser via their TV.

Of course, the development we’re all waiting for in 2014 is the release of Apple’s much anticipated iTV. It appears the iTV is now in the early development stage, and that Apple may be in the process of making a deal with Time Warner to facilitate programming on Apple devices.

The device is rumoured to include iCloud sync, the ability to control your iPhone, and ultra HD LCD panels. Keep an eye out for this release as early as summer 2014.

2. Smart watches will become ‘smarter’

Rather than having to pull out your smartphone or tablet for frequent email, text and social media updates, you’ll glance at your watch.

2014 is the year to keep an eye out for the Google watch. Rumor has it the device will integrate with Google Now, which aims to seamlessly provide relevant information when and where you want it (and even before you’ve asked for it).

We’ll see smart watches become even smarter, learning what news and updates are important to us, when we want to receive them, and responding more accurately to voice controls.

For smart watches to succeed, they’ll need to offer us something that our smart phone can’t; whether this means more intuitive notifications, or the ability to learn from our daily activities and behaviours (for instance, heart rate monitoring), it will be interesting to see.

3. Google Glass will still be in “wait and see” mode

While Google Glass hasn’t yet been released to the general public, we’ve heard enough about it to know it’s still very early days for this technology. With an estimated 60,000 units expected to sell in 2013, and a predicted several million in 2014, it’s still a long way from becoming a common household technology.

These augmented reality glasses allow you to access information like email and texts, take hands-free pictures and videos, effortlessly translate your voice, and even receive overlaid walking, cycling or driving directions, right within your field of vision.

It’s predicted that both Google Glass 2.0, and its companion, the Glass App Store, should be released to the general public sometime in 2014.

Be on the lookout for competition in this market, particularly from major players like Samsung. I predict we’ll see much of this competition aimed at niche markets like sports and healthcare.

4. Other applications and uses for Apple’s TouchID will emerge

The release of the iPhone 5S has, for the first time, made on-the-go fingerprint security a reality. I believe it is inevitable that Touch ID technology will really take off. Touch ID, which uses a capacitive sensor to take a high-resolution scan of your fingerprint, allows convenient ultra-security for your iPhone.

Currently, the technology is limited; the only real uses are unlocking your iPhone and making purchases in the App Store. I predict that we’ll see this technology incorporated into other Apple products soon. I think we’ll even see Touch ID integrated into MacBook products later this year or next.

I also predict Touch ID, though not quite bug-free, will be used for other purposes, such as securely integrating with home security systems, accessing password software, and even paying for groceries (more on that in an upcoming article).

5. Xbox One and PS4 will blur the lines between entertainment and video gaming

The new gaming consoles (Xbox One and PS4) will increasingly integrate social media-like connectivity between players. Players could have followers, work together to achieve in-game goals, and new technology will allow for equally-skilled players to compete.

The PS4, slated to be released November 15th, will track both the controller and the player’s face and movements for more intuitive play.

Apart from great gaming, these systems will allow for a far more integrative entertainment experience. For instance, rather than switching between TV, gaming, music and sports, you’ll be able to do two or even three activities side-by-side, or by easily switching back and forth.

6. 3D Printing will begin to revolutionize production

We’ve seen a huge rise in the popularity of 3D printing this year, coupled with a dramatic fall in pricing. The ability to easily create multi-layered products that are actually usable – well, that’s pretty amazing.

I’ll be watching for a movement towards simple products being produced close to home, and to greater customization given the ease of manufacturing. I think it’s inevitable that manufacturing in countries such as China will become less appealing and lucrative for businesses given the high costs of shipping and managing overseas contracts.

I don’t expect these changes to reach their full effect in 2014; however, I believe businesses will be starting to consider how this will affect their production plans for 2015 and beyond.

7. The movement towards natural language search will make search more accurate and intuitive

There was a time when we used terms like “personal digital assistant” to describe a hand-held calendar. Oh, how times have changed.

With the emergence of intelligent personal assistants like Google Now and Apple’s Siri, the goal is to have information intuitively delivered to you, often before you even ask for it. The shift seems to be away from having to actively request data, and instead to have it passively delivered to your device.

Natural language search will continue to overtake keyword-based search, as seen by Google’s move towards longer, more natural searches in its recent release of Hummingbird, Google’s largest algorithm update thus far.

 

3-D Tools & Avionics Manufacturing

By Graham Warwick
Source: Aviation Week & Space Technology

Virtual reality has become a commonplace engineering tool for major aerospace manufacturers, where three-dimensional visualization systems are routinely used to aid design reviews.

But further down the supply chain, simulation environments into which designers can immerse themselves to navigate a structure or walk a cabin are too expensive—and unnecessary if what the company produces fits on a desktop, or in the hand of an engineer.

Avionics manufacturer Rockwell Collins decided to develop its own low-cost 3-D visualization system, initially to perform virtually what previously was done physically: to visually inspect new hardware designs to assess their manufacturability.

The company’s goal in developing the Virtual Product Model (VPM) was to find manufacturing problems earlier in the design cycle, when new avionics boxes are still on the computer screen and before expensive prototypes have been produced.

“3-D virtual reality has been used at the prime level for over a decade, and we recognize its power for communicating and understanding designs and the impact of designs,” says Jim Lorenz, manager of advanced industrial engineering. “Large-scale fully immersive systems are appropriate at the platform level, but at the box level, on a tabletop, their expense is outside what we could deal with.”

Rockwell Collins’s solution was to find commercial software that could be tailored to provide a low-cost way to take product data from its computer-aided design (CAD) system, convert it to 3-D and put it into a virtual environment “without specialist skills or vast expense,” says Kevin Fischer, manager of manufacturing technology pursuits.

Using 3-D glasses and a motion-capture system, an engineer can manipulate the virtual model of an avionics box, inspecting it from all angles to make sure it can be manufactured in the factory or repaired in the field. Several people can view the 3-D model collaboratively during a design review, or it can be sent to individual engineers and viewed in 2-D format on desktop workstations.

“We take the CAD model into the VPM and put it in a format that does not need the software to run. We send an executable file, the engineers open it, inspect the model and determine what its manufacturability is by looking at it,” Fischer says.
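
Rockwell Collins’s tool chain is proprietary, but the general step Fischer describes (converting a CAD export into a lightweight, software-independent 3-D file that a reviewer can simply open) can be sketched with the open-source trimesh library in Python; the file names below are placeholders.

    import trimesh

    # Placeholder file names: a tessellated export from the CAD system goes in,
    # and a self-contained, viewer-friendly file comes out.
    mesh = trimesh.load_mesh("avionics_box.stl")
    print("loaded %d faces" % len(mesh.faces))     # quick sanity check of model size
    mesh.export("avionics_box.glb")                # binary glTF, openable in lightweight 3-D viewers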

The basic requirement is to perform virtually—via 3-D models—the manufacturability assessments previously conducted manually using physical prototypes. And “there are some unique things the system can do,” he says. These include an “augmented reality” mode that allows the user to change the 3-D model’s scale “and go between the circuit cards to see things we can’t catch physically.”

In augmented reality, the user’s hand as represented in the virtual environment, its motion captured by cameras, can be varied in size from that of a large man to that of a small woman to help uncover potential accessibility problems.

The VPM system is now in day-to-day use with new designs. A “couple of hundred” designs have gone through the process and Rockwell Collins puts the return on its investment at 800% in terms of the number of hours required to fix manufacturability issues discovered virtually in the 3-D model versus physically in a hardware prototype.

Although the CAD data is reduced in resolution when it is converted to a 3-D model for visualization, “we have yet to run into a [manufacturability] problem [in the model] and there not turn out to be a correspondingly real problem [in the hardware],” says Lorenz.

Expanding the capability is next on the agenda. One direction is to take the now-manual assessment process and automate it by bringing in rules-based analysis software. “We are starting to think about how to take the capability to visually inspect a design and apply appropriate rules to get a level of automation where we find things we don’t catch by manual inspection,” says Fischer.

Another direction is to pull more data into the visualization environment for use during design reviews, “information such as cost at the piece-part level, so we can see the implications of design decisions,” says Lorenz. “We are also doing some work at the conceptual design level. We would like to use VPM two or three times during the design cycle, but we are not there yet.”

The company also is looking at using VPM as a basis for developing 3-D work instructions for use on the factory floor, and for the technical documents used by field service representatives to troubleshoot problems. “Their key interest is getting down to the circuit-card level, while [in manufacturing] we work with boxes,” says Fischer.

Rockwell Collins also would like to expand the VPM beyond mechanical CAD data. “We want to do electrical, et cetera, in the same environment by pulling together various types of models,” says Fischer. “Anything you can do in PowerPoint, this can do better. But we need to beef up the electrical CAD side of the equation.”

Next Generation Jammer

By Graham Warwick  graham.warwick@aviationweek.com
Source: AWIN First
July 08, 2013 | Credit: Boeing

Raytheon has been selected to develop the Next Generation Jammer (NGJ) pod to replace the ALQ-99 tactical jamming system now carried by the U.S. Navy’s Boeing EA-18G Growler electronic-attack aircraft.

The company has been awarded a $279.4 million contract for the 22-month technology development (TD) phase of the program. NGJ is planned to become operational in 2020, providing increased jamming agility and precision and expanded broadband capability for greater threat coverage.

Raytheon was one of four contractors involved in the 33-month technology maturation phase of the NGJ program. The others were BAE Systems, ITT Exelis and Northrop Grumman, but the Defense Department contract announcement says only three bids were received.

Under the TD phase, Raytheon will “design and build critical technologies that will be the foundational blocks of NGJ,” says Naval Air Systems Command. The complete system will be flight tested on the EA-18G in the follow-on, 54-month engineering and manufacturing development phase.

Raytheon confirms receipt of the award and says it offered “an innovative, next-generation solution that meets current customer requirements and potential future needs.” All the competitors based their designs for the NGJ pod on active, electronically scanned array jammer antennas.

Why the Cloud is Winning

Here are another 51 million reasons why the cloud is winning

Summary: Commodity cloud services are delivering savings that put prices charged by large systems integrators to shame, according to the UK’s tech chief.

July 4, 2013 — 08:36 GMT (01:36 PDT)

Faced with a £52m bill from a large IT vendor for hosting “a major programme,” the UK government decided to turn to commodity cloud services.

The result? It picked up a comparable service from a smaller player for £942,000.
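
That gap is where the headline number comes from:

    \pounds 52{,}000{,}000 - \pounds 942{,}000 \;\approx\; \pounds 51{,}000{,}000 \ \text{saved on a single programme}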

“In the world of the cloud the services I get from a major systems integrator and from a minor systems integrator are relatively comparable, given the security and ability to host is often specced out anyway,” UK government CTO Liam Maxwell told The Economist’s CIO Forum in London yesterday.

The UK government plans to use commodity cloud services to help free itself from the stranglehold of a small number of systems integrators that traditionally carried out about 80 percent of government IT work, and charged huge sums of money for doing so.

Departments are being encouraged to buy cloud services from the government-run CloudStore — an online catalogue of thousands of SaaS, PaaS, IaaS and specialist cloud services available to public sector bodies — which are sourced by Whitehall through its G-Cloud procurement framework.

The idea of the CloudStore is to provide a platform where it is as easy for small and medium-sized businesses to sell to government as it is for large vendors. The government has simplified the supplier accreditation process, and the vendors selling through the store range from multinational corporates to start-ups.

While more than 60 percent of the spend through the CloudStore has been with SMEs since it launched last year, larger deals through the store are still going to big companies, with IBM picking up a £1.2m deal with the Home Office in May.

Spend on G-Cloud services is growing rapidly, passing £25m in May, but is still tiny compared to an estimated annual public sector IT spend of £16bn. However this could pick up even more sharply as long-term contracts with large systems integrators expire.

“The majority of the large contracts finish by 2014-15, so there’s an enormous amount of change underway at the moment,” said Maxwell.

“We’re not going to replace, we’re going to base our services around user need, and in many cases that means not doing the same thing again.”

The UK’s Office of Fair Trading today called for suppliers and purchasers in the UK public sector to contact them with their experiences on how easy it is for smaller vendors to supply to government and barriers put in place by larger players to prevent government switching to competitors.

Earlier this year the government’s director of the G-Cloud programme Denise McDonagh said systems integrators are slashing what they charge Whitehall departments in an effort to stop them from switching to cloud services.

Maxwell has plenty of government IT horror stories of his own, telling the conference it historically cost government £723 to process each payment claim made by farmers to the Rural Payments Agency.

“It would be cheaper to rent a taxi, put the cash in the taxi, drive the taxi to the farm and keep a manual record than it would have been the way the outsource contract worked,” he said.

 

Remotely Controlling Robots

Astronaut aboard the ISS successfully controls a robot on earth for the first time

By Nathan Ingraham | July 3, 2013, 04:35 pm | @NateIngraham


NASA has completed the first successful test in which an astronaut aboard the International Space Station was able to control a robot more than 400 miles away back on the surface of the Earth. According to Space.com, the June 17th test marks the first time astronauts were able to control a robot on Earth, an advancement that will hopefully pave the way for similar control over robots deployed on Mars or the moon. The simulated test consisted of astronaut Chris Cassidy controlling a K10 rover at the Ames Research Center in Moffett Field, CA; Cassidy successfully deployed a polyimide-film antenna while dealing with simulated terrain via a real-time video feed.

“It was a great success… and the team was thrilled with how smoothly everything went,” said Jack Burns, director of the NASA Lunar Science Institute’s Lunar University Network for Astrophysics Research. The trial was a test for a potential deployment of radio antennas on the far side of the moon, a mission that would utilize the same sort of technology used in last month’s trial. But more tests are needed before such a deployment — NASA says it’ll conduct follow-up tests of communications between the rover and the ISS in late July and early August.

Flash Drive

June 17, 2013

Object of Interest: The Flash Drive



When Daniel Ellsberg decided to copy the Pentagon Papers, in 1969, he secretly reproduced them, page by page, with a photocopier. The process of duplication was slow; every complete copy of the material spanned seven thousand pages. When Edward Snowden decided to leak details of surveillance programs conducted by the National Security Agency, he was able to simply slip hundreds of documents into his pocket; the government believes that Snowden secreted them away on a small device no bigger than a pinkie finger: a flash drive.

The flash drive’s compact size, ever-increasing storage capacity, and ability to interface with any computer that has a universal-serial-bus port—which is, essentially, every computer—make it an ideal device for covertly copying data or uploading malicious software onto computer systems. The devices are, consequently, an ongoing security concern. They are reportedly banned from the N.S.A.’s facilities; a former N.S.A. official told the Los Angeles Times that “special permission” is required to use them. Even then, the official said, “people always look at you funny.” In the magazine, Seymour Hersh reported that an incident involving a USB drive resulted in some N.S.A. unit commanders ordering “all ports on the computers on their bases to be sealed with liquid cement.”

USB flash drives are perhaps the purest form of two distinct pieces of technology: flash memory and the universal serial bus. Flash memory was invented at Toshiba in the nineteen-eighties. According to Toshiba’s timeline, the NAND variant of flash memory, which is the kind now used for storage in myriad devices, like smartphones and flash drives, was invented in 1987. The technology, which stores data in memory cells, remained incredibly expensive for well over a decade, costing hundreds of dollars per megabyte in the early to mid-nineteen-nineties. The universal serial bus was developed in the mid-nineties by a coalition of technology companies to simplify connecting devices to computers through a single, standardized port. By the end of the decade, flash memory had become inexpensive enough to begin to make its way into consumer devices, while USB succeeded in becoming a truly universal computer interface.

The first patent for a “USB-based PC flash disk” was filed in April, 1999, by the Israeli company M-Systems (which no longer exists—it was acquired by SanDisk in 2006). Later that same year, I.B.M. filed an invention disclosure by one of its employees, Shimon Shmueli, who continues to claim that he invented the USB flash drive. Trek 2000 International, a Singaporean company, was the first to actually sell a USB flash drive, which it called the ThumbDrive, in early 2000. (It won the trademark for ThumbDrive, which has come to be a generic term for the devices, only a few years ago.) Later that year, I.B.M. was the first to sell the devices in the U.S. The drive, produced by M-Systems, was called the DiskOnKey. The first model held just eight megabytes. The timing was nonetheless fortuitous: 1.44-megabyte floppy disks had long been unable to cope with expanding file sizes, and even the most popular souped-up replacement, the Zip drive, failed to truly succeed it. Optical media, despite storing large amounts of data, remained relatively inconvenient; recording data was time consuming, re-recording it even more so.

Improved manufacturing technologies have simultaneously increased flash drives’ capacity while decreasing their cost. The most popular flash drive on Amazon stores thirty-two gigabytes and costs just twenty-five dollars, while a flash drive recently announced by Kingston can hold one terabyte of data—enough for thousands of hours of audio, or well over a hundred million pages of documents—and transfer that data at speeds of a hundred and sixty to two hundred and forty megabytes per second. Few things come to mind that store more information in less space—a black hole, for instance.
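
A rough worked example of what those transfer rates mean in practice (assuming the drive sustains roughly the middle of the quoted range):

    \frac{1\ \text{TB}}{200\ \text{MB/s}} \;\approx\; \frac{1{,}000{,}000\ \text{MB}}{200\ \text{MB/s}} \;=\; 5{,}000\ \text{s} \;\approx\; 83\ \text{minutes}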

More critically, as convenience drives people to share more and more information across networks, rather than through meatspace—why back up data on a spare hard drive when you can store it in the cloud for cents on the gigabyte, or burn a movie to a disc for a friend when you can share it via Dropbox?—flash drives are a convenient means of transporting large quantities of information off the grid. (Getting that data onto the flash drive in the first place may be another matter, though.) Carrying a flash drive in your pocket on the subway does not produce network traffic or metadata that can later be analyzed.

Flash drives have even been used to create a new form of a dead drop in cities around the country: the drives are embedded into walls or other public spaces, and users simply plug their device into the exposed USB port to download or upload data. Though these dead drops are largely a kind of performance art, the intent is to allow people to anonymously share data without passing it over a network—a proposition that is only growing more rarefied.

It seems certain that there will be more Daniel Ellsbergs and Edward Snowdens, and almost as certain that flash drives will be a tool they use to secretly copy and abscond with the information they need—at least until something that is even more discreet, secure, and convenient arrives.