By AI Trends Staff
Neuroscientists at the University of California, San Francisco, are working on an AI program to turn thoughts into text. If it works, it could help many people with speech disabilities communicate easily.
Less lofty but still ambitious, the company Pixevia is working on integrating computer vision and advanced algorithms into an automated ecosystem around parking.
Both are examples of AI at the cutting edge. Here we review selected examples.
The UC San Francisco scientists recently published a paper on their work in the scientific journal Nature Neuroscience. “We exploit the conceptual similarity of the task of decoding speech from neural activity to the task of machine translation; that is, the algorithmic translation of text from one language to another,” they state in the paper.
The researchers tested their hypothesis in human trials, according to an account in Popular Mechanics. They implanted electrodes into the brains of four participants with epilepsy to monitor brain activity related to speech. Each person then read sentences aloud from one of two datasets: a set of picture descriptions with 30 sentences and 125 unique words, and another dataset composed of 460 sentences and some 1,800 unique words.
Each participant read 50 sentences aloud multiple times. Lines included, “Tina Turner is a pop singer.” As each person spoke, the researchers monitored their brain activity. They then fed the data into a machine learning algorithm that converted the brainwaves into strings of numbers, which another part of the system converted back into words.
The system improved over time and, in one case, achieved 97% accuracy.
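The translation framing can be illustrated with a deliberately simplified sketch. The actual system described in the paper uses a recurrent encoder-decoder network trained on electrode recordings; the toy below simulates all data and substitutes a nearest-neighbor lookup for the network, showing only the overall pipeline of recording repetitions, learning per-sentence patterns, and decoding new activity back to text.

```python
import numpy as np

rng = np.random.default_rng(0)

SENTENCES = [
    "tina turner is a pop singer",
    "the birch canoe slid on the smooth planks",
]

# Simulate brain activity: a fixed 16-dim "true" pattern per sentence,
# with noise added on every spoken repetition.
true_patterns = {s: rng.normal(size=16) for s in SENTENCES}

def record(sentence):
    """Fake electrode reading for one spoken repetition of a sentence."""
    return true_patterns[sentence] + rng.normal(scale=0.1, size=16)

# "Training": average several noisy repetitions per sentence into a template.
templates = {s: np.mean([record(s) for _ in range(10)], axis=0)
             for s in SENTENCES}

def decode(features):
    """Return the sentence whose learned template is closest to the features."""
    return min(templates, key=lambda s: np.linalg.norm(templates[s] - features))

print(decode(record("tina turner is a pop singer")))
```

Averaging repetitions is what lets the toy tolerate per-trial noise; the published system instead learns this robustness end to end from many read-aloud repetitions.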
Automating Parking in Lots Up to 100 Spaces
For the parking lot application, Pixevia is automating lots of up to 100 cars so that owners do not need anyone working at the lot, according to an account in Techopedia. The company’s system combines license plate recognition with real-time information on space availability for customers and lot operators. It can administer payments as well, by comparing license plates to payment information from car owners, so no barriers are required.
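The barrier-free billing idea reduces to matching a recognized plate against stored payment accounts and charging for the parked duration. The sketch below is hypothetical (the plate-recognition step, account names, and rate are all invented for illustration; Pixevia's actual system is not public in this detail):

```python
from datetime import datetime, timedelta

accounts = {"ABC123": "card-on-file-001"}  # plate -> payment method (invented)
sessions = {}                              # plate -> entry time
RATE_PER_HOUR = 2.0                        # invented hourly rate

def vehicle_entered(plate, when):
    """Camera at the entrance recognized a plate; start a session."""
    sessions[plate] = when

def vehicle_exited(plate, when):
    """Camera at the exit recognized the plate; compute and charge the fee."""
    entered = sessions.pop(plate)
    hours = (when - entered).total_seconds() / 3600
    fee = hours * RATE_PER_HOUR
    method = accounts.get(plate)
    if method is None:
        return f"{plate}: no account on file, {fee:.2f} due at kiosk"
    return f"{plate}: charged {fee:.2f} to {method}"

t0 = datetime(2020, 6, 1, 9, 0)
vehicle_entered("ABC123", t0)
print(vehicle_exited("ABC123", t0 + timedelta(hours=2)))
```

Because the plate itself is the key, neither a ticket nor a physical barrier is needed; unmatched plates simply fall through to a pay-at-kiosk path.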
In a related area, AI is being applied to try to improve traffic flow in cities. Researchers at the Department of Energy’s Lawrence Berkeley National Lab are working on a tool based on deep reinforcement learning models, called CIRCLES (for Congestion Impact Reduction via CAV-in-the-loop Lagrangian Energy Smoothing) to smooth traffic in congested cities.
The model simulates high volumes of traffic operating in everyday scenarios and is being designed to connect to autonomous vehicles. The potential is for energy consumption and traffic jams to be reduced by cutting down on stop-and-go traffic. Air quality could also be improved by combining deep learning algorithms with satellite images, traffic information obtained from smartphones, and environmental IoT sensors.
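The core intuition behind smoothing stop-and-go traffic with a connected vehicle can be shown with a toy simulation. This is not the CIRCLES model: the car-following rule, speeds, and gains below are invented, and the single automated car simply holds a steady speed rather than learning a policy, but the effect (one steady vehicle damping a traffic wave on a ring) is the phenomenon the research targets.

```python
import statistics

def simulate(automated_car, steps=200, n=10):
    """Ring of n cars; returns the residual speed variation after `steps`."""
    speeds = [10.0 if i % 2 == 0 else 2.0 for i in range(n)]  # initial wave
    for _ in range(steps):
        new = []
        for i in range(n):
            if i == automated_car:
                new.append(6.0)  # automated car holds a steady moderate speed
            else:
                # "Human" driver chases the car ahead, overreacting (gain > 1),
                # which amplifies the wave; speeds are clamped to [0, 12].
                ahead = speeds[(i + 1) % n]
                v = speeds[i] + 1.2 * (ahead - speeds[i])
                new.append(max(0.0, min(12.0, v)))
        speeds = new
    return statistics.pstdev(speeds)

print(f"no automation:     {simulate(automated_car=-1):.2f}")
print(f"one automated car: {simulate(automated_car=3):.2f}")
```

Without automation the overreaction sustains a full-amplitude stop-and-go wave; with one car held steady, the rest of the ring relaxes toward its speed, which is the energy-saving behavior CIRCLES aims to learn with deep reinforcement learning instead of a hand-set rule.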
Automatic license plate reader software, such as that deployed by Rekor, can recognize vehicles for real-time detection of crimes and violations.
In emotion detection, AI-powered systems can detect human emotions without visual input. Researchers at MIT have developed EQ Radio, a system that learns to identify human emotions from heartbeat data collected via wireless signals. The technology could potentially be used by smart homes to detect if a resident is experiencing a heart attack, according to a recent review of cutting-edge AI applications from The Burnie Group.
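A heartbeat-based classifier of this kind can be sketched in miniature. This is not MIT's EQ Radio method: extracting individual heartbeats from wireless reflections is assumed to happen upstream, and the features, labels, and centroid values below are invented; the sketch only shows how heart rate and beat-to-beat variability could separate emotional states with a nearest-centroid rule.

```python
import math

# (mean_bpm, beat-to-beat variability in ms) per emotion -- invented averages
centroids = {
    "calm":    (62.0, 55.0),
    "excited": (95.0, 20.0),
    "angry":   (98.0, 12.0),
}

def classify(mean_bpm, variability_ms):
    """Label a measurement with the emotion whose centroid is nearest."""
    return min(
        centroids,
        key=lambda label: math.dist((mean_bpm, variability_ms), centroids[label]),
    )

print(classify(64.0, 50.0))  # slow, variable heartbeat -> "calm"
```

The real system learns its decision boundaries from labeled recordings rather than fixed centroids, but the principle is the same: physiological features, not facial images, carry the emotional signal.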
In geospatial analytics, computer vision is used to gather and compare satellite imagery with historical data to develop insights into economic trends. Orbital Insight, for example, can predict retail sales based on satellite images of retail store parking lots.
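The parking-lot-to-sales idea amounts to fitting a trend between car counts extracted from imagery and reported sales, then projecting from new counts. The sketch below is illustrative only, not Orbital Insight's method, and all numbers are invented; the image-analysis step that produces the counts is assumed to happen upstream.

```python
import numpy as np

car_counts = np.array([120, 150, 90, 200, 170], dtype=float)  # counts per image
sales_musd = np.array([2.4, 3.0, 1.9, 4.1, 3.4])              # reported sales, $M

# Ordinary least squares: sales ~ slope * cars + intercept
slope, intercept = np.polyfit(car_counts, sales_musd, deg=1)

def predict_sales(cars):
    """Project sales (in $M) from a fresh satellite-derived car count."""
    return slope * cars + intercept

print(f"projected sales for 180 cars: ${predict_sales(180):.1f}M")
```

Production systems would control for store count, seasonality, and image timing, but the economic insight rests on this kind of regression between an observable proxy and the quantity of interest.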
Read the source articles and information in Nature Neuroscience, Popular Mechanics, Techopedia, at CIRCLES and from The Burnie Group.