Saturday 28 February 2015

Google DeepMind AI learns to play games

Google DeepMind
Google has developed artificial intelligence software capable of learning to play video games just by watching them.
Google DeepMind, a London-based subsidiary, has trained an AI gamer to play 49 different video games from an Atari 2600, beating a professional human player’s top score in 23 of them. The software isn’t told the rules of the game – instead it uses an algorithm called a deep neural network to examine the state of the game and figure out which actions produce the highest total score.
“It really is the first algorithm that can match human performance across a wide range of challenging tasks,” says DeepMind co-founder Demis Hassabis.
Deep neural networks are often used for image recognition problems, but DeepMind combined theirs with another technique called reinforcement learning, which rewards the system for taking certain actions, just as a human player is rewarded with a higher score when playing a video game correctly.
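DeepMind's full system (a "deep Q-network") is far more elaborate, but the reinforcement learning idea at its core can be sketched with the classic tabular Q-learning update; the toy states, actions, reward and learning-rate values below are invented purely for illustration:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One step of the Q-learning update rule:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))"""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy example: two states, two actions, all values starting at zero.
Q = {s: {"left": 0.0, "right": 0.0} for s in (0, 1)}

# One rewarded step nudges the estimated value of that action upward.
q_update(Q, state=0, action="right", reward=1.0, next_state=1)
print(Q[0]["right"])  # 0.1
```

In DeepMind's case the table is replaced by a deep neural network that estimates Q-values directly from raw screen pixels, which is what lets the same algorithm play 49 different games.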

Flex Charger charges up to five devices at a time


FlexCharger features wireless charging, two cables, a dock and a USB 3.0 port
Charging devices is, for many people, just a part of their daily routine. We have smartphones, tablets, laptops, portable game consoles, and all kinds of other devices that need some juice. With that in mind, there's no shortage of devices that aim to make charging easier and more efficient. A new player in the game is the FlexCharger, a versatile device that can charge five things at one time – and it does it in style.
At its core, the FlexCharger is just a device that allows you to charge five things at once, and that's useful in and of itself, but it's the extra features and goodies that make this thing stand out from the crowded charger market. One useful feature doesn't even pertain to charging, and that's the Wi-Fi repeater, which lets users extend the range of their Wi-Fi network.
The device plugs directly into a wall outlet, and users jack their devices into it from there. There's a dock on top of the FlexCharger, along with a retractable 6.6-foot (2-m) cable, a short 3.9-inch (10-cm) tray cable, and wireless charging for devices that support it. The fifth charging option comes from the USB 3.0 port on the bottom, which allows users to plug in their own cables and charge anything that doesn't work with the default options. The front of the charger folds down, allowing users to run a cable through the opening, creating a second docking area strong enough to support the weight of a tablet.
The cables come with microUSB heads, with the option of putting Lightning adapters on top for iPhone and iPad owners. The back of the device has a little storage nook for holding said adapters, which will let users maintain the clean look.
The charging ports are rated at a combined 8.4 amps, which promises enough power to handle all kinds of devices while also charging them quickly.
For travelers, there are exchangeable power adapters that allow the FlexCharger to be used around the world.
The FlexCharger team is seeking funding on Indiegogo. It started with a modest US$10,000 goal, and it has passed that in a short period of time. There's a range of early options for backers to preorder, but the final price for the Flex Charger W/R comes in at $149. The team intends to ship devices to backers in October.

There are more affordable models for buyers not interested in the wireless charging or Wi-Fi repeater, as well. The Flex Charger W/R is the full model with all of the options, while the Flex Charger W drops the Wi-Fi repeater and the Flex Charger S drops that and wireless charging.

Friday 27 February 2015

NASA study looks to the ionosphere to improve GPS communications


NASA's new study, which focused in part on the pictured auroral region, will help allow scientists to predict when and where ionosphere irregularities will occur (NASA/JSC)
A new NASA study focusing on irregularities in Earth’s upper atmosphere may help scientists overcome disruptions in GPS communication. The findings provide an insight into the causes of the disruptive regions, and represent the first time that such observations have been made from space.
The ionosphere is a barrier of charged ions and electrons, collectively known as plasma, produced by a combination of impacting particles and solar radiation. When signals pass through the barrier, they sometimes come into contact with irregularities that distort the signal, leading to less accurate data.
The NASA observations, carried out by the Canadian Space Agency's Cascade Smallsat and Ionospheric Polar Explorer (CASSIOPE) satellite, focused on the Northern Hemisphere. They compared turbulence in the auroral regions – narrow, oval-shaped areas outside the polar caps that are bombarded with particles from the magnetosphere – with that observed at higher latitudes, above the Arctic polar cap.
It was found that irregularities tend to be larger in the auroral region – where they were measured to be between 1 and 40 km (0.62 to 25 miles) – than at higher latitudes, where they measured between 1 and 8 km (0.62 to 5 miles).
The study surmised that the variation between the two regions can be attributed to outside factors, with the auroral regions being exposed to energetic particles from the magnetosphere, while the polar cap region is affected by solar wind particles and electric fields in interplanetary space. This is important information in understanding and mitigating the effects of the irregularities.
Given the issues they cause – from the distortion of radio telescope imagery to disruption in aircraft communications – obtaining a greater understanding of the irregularities is an important endeavor, and will help researchers to predict when and where they will occur.
One example of the usefulness of such predictive abilities relates to NASA’s Deep Space Network (DSN), which monitors the positions of spacecraft from Earth. The system is routinely affected by the ionosphere, but this could be mitigated by the findings, with the team able to measure the delay in GPS signals caused by disruptions in the ionosphere and relay the information back to the DSN team.
"By understanding the magnitude of the interference, spacecraft navigators can subtract the distortion from the ionosphere to get more accurate spacecraft locations,” said JPL supervisor Anthony Mannucci.
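NASA has not published its correction code, but the standard first-order formula for ionospheric group delay – delay = 40.3 · TEC / f², with TEC the total electron content along the signal path – shows how a measured delay translates into range error. The 10 TECU value below is an assumed, typical mid-latitude figure, not a value from the study:

```python
def iono_delay_m(tec_el_per_m2, freq_hz):
    """First-order ionospheric group delay in meters:
    delay = 40.3 * TEC / f^2 (TEC in electrons/m^2, f in Hz)."""
    return 40.3 * tec_el_per_m2 / freq_hz**2

L1 = 1575.42e6   # GPS L1 carrier frequency, Hz
tec = 10 * 1e16  # 10 TECU (1 TECU = 1e16 electrons/m^2), an assumed typical value

# Roughly 1.6 m of apparent range error at L1 for this TEC.
print(round(iono_delay_m(tec, L1), 2))
```

Because the delay scales as 1/f², receivers that track two frequencies can solve for the TEC term and subtract the distortion, which is the kind of correction the quote above describes.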
Source: NASA

Tuesday 24 February 2015

DARPA wants machines to have better communication skills


DARPA's Communicating with Computers (CwC) program is aimed at improving human/machine communications (Image: Shutterstock)
DARPA’s new initiative, known as the Communicating with Computers (CwC) program, aims to improve the ability of machines to communicate effectively with their human counterparts. The agency has two initial experiments planned, focusing on the somewhat differing fields of improved conversational skills and better cancer detection.
It’s difficult to miss DARPA’s intent to create more technologically advanced war machines, with initiatives such as the Ground X-Vehicles Technologies program aiming to make smarter, more agile vehicles. However, the agency is also aware of the importance of making machines communicate better with their human overlords, and that’s where the CwC program steps in.
Two-way communication with machines is a significantly more difficult proposition than it might first seem. A simple conversation between two people involves constant assimilation and contextual understanding of information – a process that’s second nature to humans, but represents a huge challenge for machines.
DARPA program manager Paul Cohen commented on this, stating, "Human communication feels so natural that we don’t notice how much mental work it requires. But try to communicate while you’re doing something else – the high accident rate among people who text while driving says it all – and you’ll quickly realize how demanding it is."
The goal of the CwC program is to develop computers that think more like people, and are therefore better able to communicate as people do. The team will work to develop a system that’s capable of completing tasks that require effective communication, the first of which will be collaborative story-telling.
For the experiment, the two parties (one human, one machine) will take turns composing sentences to build a story. This will require the machine to keep track of the ideas presented by its human counterpart before contributing its own ideas based on the established narrative – similar to a normal human conversation.
The second initial CwC task approaches the same problem from an altogether different direction, building computer-based models of the molecular processes that cause cells to become cancerous. While machines are better at reading large quantities of data, their ability to autonomously process said information falls short. The project will tackle this, aiming to develop a system that’s better able to judge the biological plausibility of proposed molecular models.
The CwC program is very much in its infancy, with the above being just the first of many experiments aimed squarely at the goal of improving machines’ communication skills.

Sunday 22 February 2015

Now you can use a key to get into your Google account


Google's new USB Security Key provides a secure and convenient means of two-factor authentication
There's an increasing recognition that passwords alone are not going to be an adequate form of online security in the future. Two-factor authentication can vastly improve security by simply introducing a second means of verification alongside a password. Google's new USB Security Key does just that.
There are various possible alternatives to using passwords or passwords alone for security. Google already offers a number of different two-step methods. Users can be sent codes via text message or phone call to input in addition to their password, they can generate a code via a mobile app, use back-up one-time-use codes or register a regularly-used computer or device as a second means of verification.
Google says that the Security Key pairs with its Chrome browser to offer even stronger security than its existing methods. It is also more convenient. Users simply insert the key into a USB port on their computer and press a button on it when prompted.
In addition to providing a second means of authentication, the key also verifies that the site requesting the password is actually a Google site and not a fake. As it is a USB key, the device is highly portable and avoids the need to rely on receiving codes or even having mobile connectivity available.
The Security Key uses the FIDO Alliance's open Universal 2nd Factor (U2F) protocol, which utilizes a standard public key cryptography approach. FIDO U2F will work with other websites as well as Google's and the company says that, in the interests of standardization, it hopes other browsers will add FIDO U2F support.
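The real U2F protocol signs the server's challenge with an ECDSA private key held on the USB device; as a minimal stdlib-only sketch, the example below substitutes an HMAC for the public-key signature (a deliberate simplification, clearly not the actual protocol) to show how binding the site's origin into the signed payload is what defeats fake sites:

```python
import os, hmac, hashlib

# In real U2F this is an ECDSA private key that never leaves the USB device;
# an HMAC secret stands in here only so the sketch runs with the stdlib.
DEVICE_SECRET = os.urandom(32)

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    # Binding the origin into the signed payload is the anti-phishing step:
    # a fake site obtains a signature its own origin can never replay.
    payload = origin.encode() + challenge
    return hmac.new(DEVICE_SECRET, payload, hashlib.sha256).digest()

def verify(challenge: bytes, origin: str, assertion: bytes) -> bool:
    expected = hmac.new(DEVICE_SECRET, origin.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

challenge = os.urandom(16)
sig = sign_assertion(challenge, "https://accounts.google.com")
print(verify(challenge, "https://accounts.google.com", sig))  # True
print(verify(challenge, "https://evil.example.com", sig))     # False: wrong origin
```

The browser supplies the origin, not the page, which is why the key can "verify that the site requesting the password is actually a Google site and not a fake."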

Cardless Cash Access lets users withdraw money from ATMs using their phone


Users of the system get a QR code sent to their phone, which they scan at the ATM
Although not all of us may think of ATM use as something that needs to be sped up, banking technology company FIS has developed a system that is claimed to streamline the cash-getting process. Known as Cardless Cash Access, it allows people to get money from ATMs within seconds, using nothing but their smartphone.
To use Cardless Cash Access, a client's identity is first verified using an app on their phone – this can be done anywhere, up to 24 hours before they plan on getting the money. They then select the account, the amount that they wish to withdraw, and the ATM location at which they wish to pick the cash up.
When they get to that location, the money will be there waiting for them. To get the machine to dispense it, they just use the ATM to scan a one-time-use QR code that has been sent to their phone's screen. An electronic receipt is subsequently sent to their phone, eliminating the need for a paper version.
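FIS hasn't published its protocol, but the pre-staged, single-use token flow described above can be sketched in a few lines; the function names, in-memory store and 24-hour window below are illustrative assumptions, not details of the actual system:

```python
import secrets, time

ISSUED = {}  # token -> (amount, expiry); a server-side store in a real system

def issue_withdrawal_token(amount, ttl_s=24 * 3600):
    """Pre-stage a withdrawal: the token would be encoded into the QR code."""
    token = secrets.token_urlsafe(16)
    ISSUED[token] = (amount, time.time() + ttl_s)
    return token

def redeem_at_atm(token):
    """One-time use: the token is deleted the moment it is redeemed."""
    record = ISSUED.pop(token, None)
    if record is None:
        return None  # unknown, expired-and-purged, or already used
    amount, expiry = record
    return amount if time.time() <= expiry else None

t = issue_withdrawal_token(100)
print(redeem_at_atm(t))  # 100: cash dispensed
print(redeem_at_atm(t))  # None: a second scan of the same code fails
```

The one-time property is what undercuts skimming: a captured QR code is worthless the instant the legitimate withdrawal completes.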
According to FIS, the system should decrease incidences of card skimming and fraud, plus it also ought to shorten wait times at bank machines. People worried about getting robbed while using ATMs will probably also appreciate spending as little time at them as possible.
Users' account information is stored in a secure cloud-based server, so even if they lose their phone, no one else will be able to access that information without their password.
The system has already been utilized in pilot projects in three US cities, and FIS recently announced that three more such projects are planned to take place using City National Bank ATMs in Los Angeles, New York City and San Francisco.

Saturday 21 February 2015

Lapdeck laptop stand lets you escape the desk

Lapdeck lets you use your laptop in bed while retaining good posture

A large number of people spend a lot of time using their laptops at home, whether to work or for entertainment. Every single one of these people will likely understand the desire to combine the comfort associated with sitting on the sofa and the good posture and productivity levels associated with sitting at a desk. Lapdeck allows you to do both, while incorporating a couple of other notable features.
While there are already plenty of lap desks on the market, Lapdeck is a collapsible desk made from a single sheet of corrugated fiberboard. As with the others, it's designed to lift the weight and heat of your laptop off of your legs, enabling you to sit up straighter than you would when trying to balance the laptop on your knees. However, Lapdeck can also be folded flat for transport, and be recycled when it reaches the end of its life.
While Lapdeck is made from cardboard, and consequently weighs in at just 12 oz (340 g), it's strong enough to support devices weighing up to 25 lb (11.3 kg). This is more than enough to support any laptop up to 18 inches (45.7 cm) in size.
Lapdeck can be used anywhere at any time, being assembled and disassembled in seconds
Lapdeck is reportedly easy to put together and can be assembled and disassembled multiple times with no problem. When folded flat it will fit inside a standard #7 padded envelope, making it perfect for travelers. Its inventors claim Lapdeck will last between four and 12 months before needing to be replaced.
Such a short lifespan would be unacceptable if it wasn't for the Lapdeck having such a low asking price, with a pledge of US$10 securing one on Kickstarter (with $2 shipping inside the US, $14 everywhere else). The money raised by the campaign will pay for the tooling, production and shipment of Lapdeck, assuming all goes according to plan.
The video below shows Lapdeck being used in a variety of situations, as well as being assembled and disassembled.

Friday 20 February 2015

Ultra security claimed for IoT applications


Atmel says its latest CryptoAuthentication product – the ATECC508A – is the first to integrate the Elliptic Curve Diffie–Hellman (ECDH) security protocol, an ultra secure method to provide key agreement for encryption/decryption. The part, which also features Elliptic Curve Digital Signature Algorithm (ECDSA) authentication, is targeted at IoT applications.
The ATECC508A is the second device in Atmel's CryptoAuthentication portfolio to offer advanced Elliptic Curve Cryptography (ECC) capabilities. It is compatible with any MCU or MPU and requires only a single general purpose I/O over a wide voltage range.
Rob Valiton, general manager of Atmel's Automotive, Aerospace and Memory business units, said: "It is the first device of its kind to apply hardware based key storage to provide the full complement of security capabilities; specifically confidentiality, data integrity and authentication. We are excited to continue bringing ultra secure crypto element solutions to a wide range of applications."
With a guaranteed 72-bit unique ID, the device includes a 10.5-Kbit EEPROM for secret and private keys and is available in UDFN, SOIC and 3-lead contact packages.
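ECDH performs Diffie–Hellman key agreement over an elliptic-curve group. The sketch below runs the same exchange over a small finite field using only the standard library – the toy 61-bit prime is nowhere near secure and stands in for the curve arithmetic the ATECC508A does in hardware – to illustrate how both sides derive a shared secret without ever transmitting it:

```python
import secrets

# Toy parameters for illustration only: real deployments use standardized
# elliptic curves (e.g. P-256) or large MODP groups, never a prime this small.
p = 2**61 - 1  # a small Mersenne prime
g = 3

alice_priv = secrets.randbelow(p - 2) + 1
bob_priv = secrets.randbelow(p - 2) + 1

alice_pub = pow(g, alice_priv, p)  # sent in the clear
bob_pub = pow(g, bob_priv, p)      # sent in the clear

# Each side combines its own private key with the other's public value...
alice_shared = pow(bob_pub, alice_priv, p)
bob_shared = pow(alice_pub, bob_priv, p)

# ...and both arrive at the same secret, which an eavesdropper who saw only
# the public values cannot feasibly compute.
print(alice_shared == bob_shared)  # True
```

The hardware angle is key storage: by keeping the private exponent inside the chip, the ATECC508A aims to ensure the secret never exists in host memory at all.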
See more at: http://www.newelectronics.co.uk/electronics-news/ultra-security-claimed-for-iot-applications/73972/

Monday 16 February 2015



Zorloo crams DAC and amp into Z:ero earphones inline control unit


The Zorloo team says that it's managed to integrate a headphone amp and DAC into the 8 x 34 mm PCB of the inline playback and volume controller
Though certainly convenient, listening to music through earphones plugged into a smartphone is not always a satisfying experience. Some music lovers, like myself, prefer to carry around a dedicated high-quality audio player, while others who like to groove on the move might give the source audio a welcome boost by using a mini middleman like Cambridge Audio's DacMagic XS. Either way, higher quality comes at the expense of increased pocket bulk. The folks at Hong Kong-based Zorloo claim that they've managed to shrink a digital-to-analog converter, headphone amp and control board down to dimensions small enough for integration into the inline playback/volume controller of the company's upcoming Z:ero earphones.
Rather than plugging the Z:eros into a smartphone's 3.5 mm audio jack and relying on the mobile device's own amp to deliver the tunes, these earphones grab the source audio directly from the micro-USB port. This essentially takes the audio processing strain away from the smartphone and, in so doing, is said to result in the delivery of higher quality sound to the listener.
The Zorloo team says that it's also managed to integrate a headphone amp and DAC into the 8 x 34 mm PCB of the inline playback and volume controller. This is reported to boost source audio output power to a max of 27 mW at 32 ohms, more than double that of many leading modern smartphones, and "frees up any deficiency in the mobile audio implementation and re-create high fidelity music to your ears."
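That 27 mW figure can be sanity-checked with Ohm's law: P = V²/R gives the drive voltage needed into a 32-ohm load. The ~10 mW smartphone output used for comparison below is an assumed round figure, not a measured value:

```python
import math

def drive_voltage(power_w, impedance_ohms):
    # P = V^2 / R  =>  V = sqrt(P * R)
    return math.sqrt(power_w * impedance_ohms)

# Zorloo's claimed 27 mW into 32-ohm earphones...
print(round(drive_voltage(0.027, 32), 2))  # 0.93 (volts)

# ...versus an assumed ~10 mW typical smartphone headphone output.
print(round(drive_voltage(0.010, 32), 2))  # 0.57 (volts)
```

Roughly double the power into the same load gives noticeably more headroom for power-hungry or insensitive earphones, which is the substance of the "more than double" claim.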
The Z:ero earphones feature neodymium drivers housed in aluminum casing coated in glossy red or gold, promising responsive and vibrant audio, 20 Hz - 20 kHz frequency response and a total harmonic distortion of 0.02 percent.
The development team advises that only a limited number of smartphones have been tested for compatibility at present, but the testing program continues apace. As such, the earphones are currently only guaranteed to work with seven smartphones from Samsung, three from Sony, two from LG and one from Google, though the makers say that compatibility can be extended to other USB OTG-sporting models by downloading and installing certain apps from the Play Store.
The Zorloo team has hit crowdfunding platform Indiegogo to bring the DAC-packing earphones to market. The US$16 super-cheap early bird special has already gone, so backers will now need to stump up at least $25 to join the Z:ero digital revolution. If all goes to plan, delivery is expected to start this coming April.
If you're an iOS mobile user and want in on the Z:ero action, the team says that, thanks to a stretch goal being achieved, an iPhone-friendly (Lightning) version is on its way (though this will be subject to a separate crowdfunding campaign and not an extra reward for this Indiegogo outing).

Buhel SoundGlasses let you take calls, hands- and earphone-free


Buhel's SG05 SoundGlasses relay calls using bone conduction technology
There are already plenty of ways of taking hands-free phone calls, although most of those involve wearing some sort of earpiece. Not everyone enjoys having something continuously stuck in their ear, however, plus such devices lessen the user's ability to hear other sounds through that ear. Buhel's SG05 SoundGlasses take a different approach. They relay sound to the user via bone conduction, leaving their ears open to hear the world around them.
SoundGlasses communicate with the user's iOS, Android or Windows smartphone via Bluetooth 4.0. Phone call audio (or music from the mobile device's library, or from a Bluetooth-enabled MP3 player) is played back through two transducers, one located in each arm of the glasses. As with other bone conduction devices, the vibrations from those transducers travel through the bone in the sides of the user's skull to their inner ear, where they're heard as sound.
Not only does this leave the ear canal open to hear other sounds such as oncoming traffic, but it also allows for the use of hearing protection in overly-noisy environments.
Users hold up their end of the conversation using a bidirectional noise-canceling mic in the bridge of the sunglasses, plus a multi-function button can be utilized to place and end calls, or to activate Siri/Cortana. One 3-hour charge of the glasses' lithium-ion battery should be good for about three hours of talk time, or around 300 hours on standby.
SoundGlasses come with multiple interchangeable Category 3 UV-certified lenses, along with an adapter for allowing users to mount their own prescription lenses. Buhel's parent company Atellani is currently raising production funds for the glasses, on Kickstarter. A pledge of US$160 will get you a set, assuming all goes according to plan. The estimated retail price is somewhere over $270.
Buhel, incidentally, already offers ski goggles with similar functionality.

LG's new Android Wear smartwatch, the Watch Urbane, has an all-metal body


LG's G Watch R was one of the more fashionable-looking smartwatches of 2014
In the long-term, Android Wear isn't likely to be about just a small handful of watches. Fashion and individuality often go hand-in-hand, and now we're starting to see some of the early Android Wear watch-makers reflect that, making different smartwatches for different styles. The latest? LG's all-metal take on the G Watch R, the LG Watch Urbane.
The LG Watch Urbane (no "G" this time around) takes the same 1.3-in P-OLED display we saw in the G Watch R, and puts it on a stainless steel body. While it carries the same basic design language as its cousin, the LG Watch Urbane loses the diver watch-inspired dial in favor of a cleaner aesthetic.
LG will offer the Watch Urbane in gold and silver models, both with leather bands
Curiously, LG describes the Watch Urbane as having a thinner profile than the G Watch R, but its specs suggest otherwise. The press release lists the new model at 10.9 mm (0.43-in) thick, while our records have the G Watch R coming in at 9.7 mm (0.38-in) thick. Typo, perhaps?
(Update: LG's press release also mentions "narrower bezels," so we're guessing "thinner profile" is, somewhat misleadingly, referring to that.)
The company is boasting that the new model is better suited for fashion, and can be worn by either men or women (the G Watch R was quite the masculine-looking gizmo). The new model ships with a leather band, but, like many smartwatches, that can be swapped for a standard 22 mm band.
It still has a fully round screen, measuring 1.3 inches
Of course it runs Android Wear, and everything else appears to be identical to the G Watch R on the inside. Same screen, same Snapdragon 400 processor and same battery capacity. Ditto for its heart rate sensor. It could have just as easily been called "LG G Watch R: Stainless Steel Edition."
No word yet on pricing, but we wouldn't be shocked if the all-metal watch rings up for a bit more than its predecessor's US$300 price tag. LG will be showing off the fashionable Urbane at Mobile World Congress early next month, where Gizmag will be on the ground.

Telescopic contact lenses zoom in and out with the wink of an eye


Prototype telescopic contact lenses may one day help sufferers of age-related macular degeneration
Researchers from the Ecole Polytechnique Federale de Lausanne (EPFL) in Switzerland have developed contact lenses that have tiny telescopic lenses built in to boost vision. Controlled by smart glasses that react in response to the winking of an eye, the device allows the wearer to zoom in on objects by providing magnification up to 2.8 times that of unaided human eyesight.
Originally created under the auspices of the Pentagon's main research arm, DARPA – where the lenses may also serve to enhance future soldiers' vision capabilities in the field – this development is an update to a model first released in 2013 and fine-tuned since then. This latest prototype was unveiled by Eric Tremblay from EPFL at the American Association for the Advancement of Science (AAAS) annual meeting.
In its latest form, the contact lenses are also now being promoted as a possible visual aid for those in the civilian arena who suffer from Age-related Macular Degeneration (AMD). AMD is a crippling eye disease that mainly affects sufferers over the age of 55 where the deterioration of their macula (or, more precisely, the macula lutea – an oval-shaped pigmented area near the middle of the retina) leads to a loss of sight in the center of the eye.
"We think these lenses hold a lot of promise for low vision and age‐related macular degeneration," said Tremblay. "It’s very important and hard to strike a balance between function and the social costs of wearing any kind of bulky visual device. There is a strong need for something more integrated, and a contact lens is an attractive direction. At this point this is still research, but we are hopeful it will eventually become a real option for people with AMD."
Smart glasses control the telescoping effect by registering the user's winks (Photo: Eric Tremblay)
Just 1.55 mm (0.06 in) thick, the new prototype contact lens contains an exceptionally thin, reflective, two-part magnifying region which is turned on and off in response to the defined movement of the wearer's eyelids. Programmed to be smart enough to differentiate between a longer, deliberate wink and the normal blink of an eye, the glasses that accompany the lenses allow the wearer to wink with the right eye to zoom in, and wink with the left eye to return to standard vision.
"Small mirrors within bounce light around, expanding the perceived size of objects and magnifying the view, so it's like looking through low magnification binoculars," said the researchers.
In detail, the glasses actually achieve magnification control by selecting the polarization of the light that reaches two different apertures contained in the lens. As the lens permits only light polarized in one direction through the standard-vision aperture and only in another way in the magnifying aperture, the user's eye sees only the image where the polarization of the glasses and the polarization of the contact lens aperture are the same.
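The aperture selection follows from Malus's law, which gives the fraction of linearly polarized light passing a polarizer as cos²θ of the relative angle. The sketch below assumes the two apertures are oriented at 0° and 90° (an illustrative choice; the paper's exact geometry isn't given here) to show how switching the glasses' polarization picks one view or the other:

```python
import math

def transmitted_fraction(polarizer_deg, light_deg):
    """Malus's law: fraction of linearly polarized light passing a polarizer
    at relative angle theta is cos^2(theta)."""
    theta = math.radians(polarizer_deg - light_deg)
    return math.cos(theta) ** 2

# Assumed orientations: standard-vision aperture passes 0-degree light,
# magnifying aperture passes 90-degree light. The glasses switch between them.
for glasses_angle in (0, 90):
    standard = transmitted_fraction(glasses_angle, 0)
    magnified = transmitted_fraction(glasses_angle, 90)
    print(glasses_angle, round(standard, 2), round(magnified, 2))
# 0  -> 1.0 through the standard aperture, 0.0 through the magnifier
# 90 -> 0.0 through the standard aperture, 1.0 through the magnifier
```

Because the eye only ever receives light whose polarization matches the selected aperture, the wearer sees either the unmagnified or the 2.8x view, never both superimposed.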
Unlike ordinary soft contact lenses, the telescopic lenses are built on larger, rigid scleral lenses – lenses that sit on the sclera, the white of the eye – from a number of precision-machined plastic components, tiny aluminum mirrors, and thin polarizing films, all held together with biologically safe adhesives. To provide the constant stream of oxygen required by the eye, the researchers incorporated minuscule air channels, approximately 0.1 mm (0.004 in) across, throughout the structure of the lens.
With further improvements over previous models, which required the user to tilt their head and look through the lens at just the right angle for the effect to be useful, the latest prototype also has the smarts to track eye movement and position the focus accordingly, thereby making them much easier to use and less tiring to wear.
The team working on the prototype lenses included participants from the University of California, San Diego, along with researchers at Paragon Vision Sciences, Innovega, Pacific Sciences and Engineering, and Rockwell Collins.
No announcement has been made regarding any pending commercial or military release of the technology.

ConferenceCam Connect could make web conferences on the go a little easier


The new Logitech ConferenceCam Connect aims to make teleconferencing easier
Logitech is aiming to make it easier for businesses to have teleconferences, with the introduction of its new ConferenceCam Connect. In offices where conference rooms are not equipped with teleconferencing equipment, Logitech's new portable offering should come in handy.
The 1080p teleconferencing camera is designed to work with Mac OS X 10.7 and higher, and Windows 7 or 8.1. It will also work with some Google Chromebooks. As for video conferencing software, all of the major players are supported, including Microsoft Lync 2013, Cisco Jabber and WebEx, and Skype.
As for the hardware itself, Logitech has included a ZEISS-certified 90-degree lens, which is designed to allow a small to medium conference room to appear in the shot. It also features two omni-directional microphones, which offer 360-degree voice pickup within a 12-foot (3.7-m) diameter.
Miracast technology is also included to allow users to mirror their screen from a Windows 8.1 or Android 4.3 device. This will offer a quick way for users to show design ideas, or anything else during a meeting.
The last key feature of the ConferenceCam Connect is the remote, which allows users to take advantage of features like 4X zoom without having to get up and move to the camera. Since the device is designed to be moved from room to room, the remote docks into the back of the camera.

Wireless sensor alerts your smartphone as food begins to spoil


MIT researchers modified an NFC tag to function as a gas-detecting sensor
Chemists at MIT have been working on a wireless, inexpensive sensor that, among other things, identifies spoiled food early by detecting gases in the air. It then shares its data with a smartphone, potentially alerting users to that soon-to-be moldy fruit in the bottom of the fridge.
“The beauty of these sensors is that they are really cheap,” says Timothy Swager, Professor of Chemistry at MIT. "You put them up, they sit there, and then you come around and read them. There’s no wiring involved. There’s no power. You can get quite imaginative as to what you might want to do with a technology like this.”
Swager has something of a history in developing gas-detecting sensors. In 2007, his amplified chemical sensors designed to detect vapors from explosives such as TNT saw him awarded the prestigious US$500,000 Lemelson-MIT Prize. In 2012 he produced ethylene sensors to gauge the ripeness of fruit, a tool that could help grocers arrange their stock to minimize waste and maximize sales of fresh produce. His latest creation could be seen as a culmination of these earlier achievements.
The new sensors are modified near-field communication (NFC) tags, which are often used as proximity sensors. The team punched a hole in the tag's electronic circuit and then replaced the missing link with carbon nanotubes designed to detect particular gases. The nanotubes were drawn on using mechanical pencils, which were also developed in Swager's lab back in 2012.
These sensors require little power, which comes courtesy of short pulses of magnetic fields emitted by the smartphone used to read them. Normally, these pulses induce an electric current in the tag's circuit that keeps it running. But in the modified tags, once the carbon nanotubes detect a targeted gas in the air, the radio frequency at which the tag receives these pulses shifts. The sensor will only respond to the reading smartphone if the frequency is unchanged, thereby indicating whether or not a targeted gas is present.
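To make that read-out logic concrete, here is a minimal Python sketch of the idea: the reader infers that gas is present simply from whether the tag still answers at its expected frequency. The carrier value, the tolerance window, and the linear shift model are all invented for illustration and are not taken from the MIT work.

```python
# Illustrative sketch (not MIT's actual firmware): the reader decides whether
# a target gas is present based on whether the tag still responds at the
# expected frequency. All constants below are hypothetical.

BASE_FREQ_MHZ = 13.56  # standard NFC carrier frequency
TOLERANCE_MHZ = 0.05   # hypothetical window within which the tag responds

def tag_resonant_freq(gas_concentration_ppm, shift_per_ppm=0.001):
    """Model the nanotube link shifting the tag's resonance as gas binds."""
    return BASE_FREQ_MHZ + gas_concentration_ppm * shift_per_ppm

def gas_detected(gas_concentration_ppm):
    """The tag only answers the reader if its resonance is unshifted,
    so a missing response implies the target gas is present."""
    shift = abs(tag_resonant_freq(gas_concentration_ppm) - BASE_FREQ_MHZ)
    responds = shift < TOLERANCE_MHZ
    return not responds

print(gas_detected(0))    # clean air: tag responds, so no detection
print(gas_detected(200))  # gas present: no response, so detection
```

The key design point, mirrored from the article, is that detection is signaled by the *absence* of a normal NFC reply rather than by any extra data channel.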
At the moment, each sensor is only able to detect one gas, and the smartphone must be held within 5 cm (2 in) to pick up a reading. The chemicals successfully sniffed out in testing include gaseous ammonia, hydrogen peroxide and cyclohexanone. While revealing rotting food is one potential use for the sensors, the minimal amount of energy required could see them deployed just about anywhere, possibly detecting everything from explosives to environmental pollutants to dangerous gas levels in manufacturing plants.
We have seen other devices emerge recently that are aimed at letting you know when food is unsafe to eat, such as Peres and color-coded smart tags, though they don't quite promise the same versatility as MIT's newest creation.
The researchers have filed a patent for the technology and are now further exploring its potential applications. They are also seeking to integrate Bluetooth technology to expand its range beyond 5 cm (2 in).

Sunday 15 February 2015

Computer composers are changing how music is made


Could artificial intelligence render composers obsolete, or will it usher in a new era of ...
You've probably heard music composed by a computer algorithm, though you may not realize it. Artificial intelligence researchers have made huge gains in computational – or algorithmic – creativity over the past decade or two, and in music especially these advances are now filtering through to the real world. AI programs have produced albums in multiple genres. They've scored films and advertisements. And they've also generated mood music in games and smartphone apps. But what does computer-authored music sound like? Why do it? And how is it changing music creation? Join us, in this first entry in a series of features on creative AI, as we find out.
Semi-retired University of California Santa Cruz professor David Cope has been exploring the intersection of algorithms and creativity for over half a century, first on paper and then with computers. "It seemed even in my early teenage years perfectly logical to do creative things with algorithms rather than spend all the time writing out each note or paint this or write out this short story or develop this timeline word by word by word," he tells Gizmag.
Cope came to specialize in what he terms algorithmic composition (although, as you'll see later in this article series, that's far from all he's proficient at). He writes sets of instructions that enable computers to automatically generate complete orchestral compositions of any length in a matter of minutes using a kind of formal grammar and lexicon that he's spent decades refining.
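As a toy illustration of the formal-grammar idea, a composition system can recursively rewrite abstract symbols ("phrase", "motif", "cadence") into concrete notes. Cope's actual system is vastly more sophisticated; the grammar rules and note choices below are invented purely for this sketch.

```python
import random

# Invented toy grammar, loosely in the spirit of grammar-driven composition.
# Non-terminal symbols expand into productions; anything not in the table
# is treated as a terminal note name.
GRAMMAR = {
    "phrase":  [["motif", "motif", "cadence"]],
    "motif":   [["C4", "E4", "G4"], ["D4", "F4", "A4"], ["E4", "G4", "B4"]],
    "cadence": [["G4", "C4"], ["B4", "C5"]],
}

def expand(symbol, rng):
    """Recursively rewrite a grammar symbol into a flat list of notes."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal: an actual note
    production = rng.choice(GRAMMAR[symbol])
    notes = []
    for part in production:
        notes.extend(expand(part, rng))
    return notes

melody = expand("phrase", random.Random(42))
print(melody)  # an 8-note phrase that ends on a cadence
```

Re-running with a different seed yields a different but always well-formed phrase, which is the essential appeal of the approach: structure is guaranteed by the grammar while the details vary.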
His experiments in musical intelligence began in 1981 as the result of a composer's block in his more traditional music composition efforts, and he has since written around a dozen books and numerous journal articles on the subject. His algorithms have produced classical music ranging from single-instrument arrangements all the way up to full symphonies by modeling the styles of great composers like Bach and Mozart, and they have at times fooled people into believing that the works were written by human composers. You can listen to one of Cope's experiments below.
For Cope, one of the core benefits of AI composition is that it allows composers to experiment far more efficiently. Composers who lived prior to the advent of the personal computer, he says, had certain practicalities that limited them, namely that it might take months of work to turn an idea into a composition. If a piece is not in the composer's usual style, the risk that this composition may be terrible increases, because it will not be built on the techniques that they've used before and know will generally work. "With algorithms we can experiment in those ways to produce that piece in 15 minutes and we can know immediately whether it's going to work or not," Cope explains.
Algorithms that produce creative work have a significant benefit, then, in terms of time, energy, and money, as they reduce the wasted effort on failed ideas.
Shareware Windows app Easy Music Composer is one of dozens of tools now available that can assist composers by generating partial or complete compositions from given variables
Tools such as Liquid Notes, Quartet Generator, Easy Music Composer, and Maestro Genesis are liberating to open-minded composers. They generate the musical equivalent of sentences and paragraphs with consummate ease, their benefit being that they do the hard part of translating an abstract idea or intention into notes, melodies, and harmonies.
Those with coding talent have it even better: Algorithms that a programmer can write in minutes could test any hypotheses a composer might have about a particular musical technique and produce virtual instrument sounds in dozens or hundreds of variations that give them a strong idea of how it works in practice. And Cope argues that all of this makes possible "an arena of creativity that could not have been imagined by someone even 50 years ago."
The compositions that computers create don't necessarily need any editing or polishing from humans. Some, such as those found in the album 0music, are fit to stand alone. 0music was composed by Melomics109, which is one of two music-focused AIs created by researchers at the University of Malaga. The other, which was introduced in 2010, three years before Melomics109, is Iamus. Both use a strategy modeled on biology to learn and evolve ever-better and more complex mechanisms for composing music. They began with very simple compositions and are now moving towards professional-caliber pieces.
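The "strategy modeled on biology" can be pictured as a tiny evolutionary loop: melodies are genomes, a fitness function scores them, and the fittest survive and mutate. The sketch below is an invented toy (smoothness as fitness, MIDI pitches as genes) and bears no relation to the actual Melomics/Iamus algorithms beyond the general evolutionary shape.

```python
import random

# Toy evolutionary composer: melodies are lists of MIDI pitches, and
# fitness rewards small melodic steps. Invented for illustration only.
rng = random.Random(0)

def random_melody(length=8):
    return [rng.randint(60, 72) for _ in range(length)]  # MIDI C4..C5

def fitness(melody):
    # Smoother melodies (smaller leaps between adjacent notes) score higher.
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

def mutate(melody):
    child = list(melody)
    i = rng.randrange(len(child))
    child[i] = max(60, min(72, child[i] + rng.choice([-2, -1, 1, 2])))
    return child

population = [random_melody() for _ in range(20)]
initial_best = max(fitness(m) for m in population)

for _ in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # keep the fittest half (elitism)
    population = survivors + [mutate(rng.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(fitness(best))  # never worse than the initial best, thanks to elitism
```

Because the survivors are carried over unchanged each generation, the best fitness can only improve over time, echoing how such systems "began with very simple compositions" and gradually grew more capable.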
"Before Iamus, most of the attempts to create music were oriented to mimic previous composers, by providing the computer with a set of scores/MIDI files," says lead researcher and Melomics founder Francisco Javier Vico. "Iamus was new in that it developed its own original style, creating its work from scratch, not mimicking any author."
It came as quite a shock, Vico notes. Iamus was alternately hailed as the 21st century's answer to Mozart and the producer of superficial, unmemorable, dry material devoid of soul. For many, though, it was seen as a sign that computers are rapidly catching up to humans in their capacity to write music. (You can decide for yourself by watching the Malaga Philharmonic Orchestra performing Iamus' Adsum.)
There's no reason to fear this progress, those in the field say. AI-composed music will not put professional composers, songwriters, or musicians out of business. Cope states the situation simply. "We have composers that are human and we have composers that are not human," he explains, noting that there's a human element in the non-human composers – the data being crunched to make the composition possible is chosen by humans if not created by them, and, deep learning aside, algorithms are written by humans.
Cope finds fears of computational creativity frustrating, as they belie humanity's arrogance. "We could do amazing things if we'd just give up a little bit of our ego and attempt to not pit us against our own creations – our own computers – but embrace the two of us together and in one way or another continue to grow," he says.
Vico makes a similar point. He compares the situation to the democratization of photography that has occurred in the last 15-20 years with the rise of digital cameras, easy-to-use editing software, and sharing over the Internet. "Computer-composers and browsing/editing tools can make a musician out of anyone with a good ear and sensibility," he says.
Perhaps more exciting, though, is the potential of computer-generated, algorithmically-composed music to work in real time. Impromptu (see the video demonstration below) and various other audio plugins and tools help VJs and other performers to "live code" their performances, while OMax learns a musician's style and effectively improvises an accompaniment, always adapting to what's happening in the sound.
In the world of video games, real-time computer-generated music is a boon because its adaptability suits the unpredictable nature of play. Many games dynamically re-sequence their soundtracks or adjust the tempo and add or remove instrument layers as players come across enemies or move into different parts of the story or environment. A small few – including, most notably, Spore from SimCity and The Sims developer Maxis – use algorithmic music techniques to orchestrate players' adventures.
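The layering technique games use can be sketched in a few lines: a mixer consults the current game state and decides which instrument stems should be audible. The layer names and thresholds below are invented for illustration and don't describe any particular game's implementation.

```python
# Hypothetical sketch of dynamic music layering: instrument stems are
# switched on or off based on game state. All names/thresholds invented.

def active_layers(state):
    """Return which stems the mixer should play for a given game state."""
    layers = ["ambient_pad"]  # the base layer is always playing
    if state.get("enemies_nearby", 0) > 0:
        layers.append("percussion")   # tension enters with the first enemy
    if state.get("enemies_nearby", 0) > 3:
        layers.append("brass_stabs")  # escalate when outnumbered
    if state.get("in_boss_room"):
        layers.append("choir")        # full drama for boss encounters
    return layers

print(active_layers({"enemies_nearby": 0}))
print(active_layers({"enemies_nearby": 5, "in_boss_room": True}))
```

Because stems are added and removed rather than recomposed, the result stays musically coherent while still reacting instantly to play, which is exactly why the approach suits the unpredictable pacing of games.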
Composer Daniel Brown took this idea a step further in 2012 with Mezzo, an AI that composes soundtracks in real time in a neo-Romantic style based on what characters in the game are doing and experiencing. You can see an example of what it produces below.
Other software applications can benefit from this sort of approach, too. There are apps emerging such as Melomics' @life that can compose personalized mood music on the fly. They learn from your behavior, react to your location, or respond to your physiological state.
"We have tested Melomics apps in pain perception with astonishing results," Vico says. He explains that music tuned to a particular situation can reduce pain, or the probability of feeling pain, by distracting patients. And apps are currently being tested that use computer-generated music to help with sleep disorders and anxiety.
"I would not try to sleep with Beethoven's 5th symphony (even if I love it), but a piece that captures your attention and slows down as you fall asleep – that works," Vico says. "We'll see completely new areas of music when sentient devices (like smartphones) control music delivery."
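The biofeedback idea Vico describes, music that slows down as the listener drifts off, amounts to mapping a physiological signal onto a playback parameter. Here is an invented sketch of that mapping; the resting rate, base tempo, and clamping range are hypothetical and not drawn from Melomics' apps.

```python
# Hypothetical biofeedback mapping: scale musical tempo with heart rate,
# so the music naturally slows as the listener relaxes. Constants invented.

def tempo_for_heart_rate(heart_bpm, base_tempo=90.0, resting_bpm=70.0):
    """Map heart rate (beats/min) to playback tempo, clamped to a musical range."""
    tempo = base_tempo * (heart_bpm / resting_bpm)
    return max(40.0, min(140.0, tempo))

print(tempo_for_heart_rate(70))  # at resting heart rate, play the base tempo
print(tempo_for_heart_rate(50))  # as heart rate falls, the music slows too
```

A real app would smooth the sensor signal and change tempo gradually between phrases, but the core feedback loop, body state in, musical parameter out, is this simple.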
Tune in next week for the second entry in the series, as we turn our gaze to AI-created video games and AI-assisted video game development.