Google’s Digital News Initiative has committed £622,000 ($805,000) to fund an automated news-writing project for the U.K.-based news agency The Press Association. The money will help pay for the creation of Radar (Reporters And Data And Robots), snappily named software designed to generate upwards of 30,000 local news stories a month.
The Press Association has enlisted U.K.-based news startup Urbs Media to create software that turns news data into palatable content. Once up and running, the team hopes the software will fill some of the local coverage gaps currently going under-served as the financial strain on newsrooms around the world deepens.
It’s similar to a model The Associated Press has employed for a while now here in the States, mostly tackling financial and niche sports stories. A quick Google News search of the tell-tale tagline “This story was generated by Automated Insights” reveals hits from news outlets across the U.S.
In a news release heralding the financial commitment, Press Association Editor-in-Chief Peter Clifton called the move a “genuine game-changer,” stressing that the partnership will focus on stories that might not otherwise be written up as local newspapers continue to die off in this massive fourth-estate extinction. Of course, he was also quick to add that the move won’t do away with the human touch entirely.
This is an awesome overview of AI and related terms and technologies by Tim Appenzeller at www.sciencemag.org.
Just what do people mean by artificial intelligence (AI)? The term has never had clear boundaries. When it was introduced in 1956, it was taken broadly to mean making a machine behave in ways that would be called intelligent if seen in a human.
Big data has met its match. In field after field, the ability to collect data has exploded—in biology, with its burgeoning databases of genomes and proteins; in astronomy, with the petabytes flowing from sky surveys; in social science, tapping millions of posts and tweets that ricochet around the internet. The flood of data can overwhelm human insight and analysis, but the computing advances that helped deliver it have also conjured powerful new tools for making sense of it all.
In a revolution that extends across much of science, researchers are unleashing artificial intelligence (AI), often in the form of artificial neural networks, on the data torrents. Unlike earlier attempts at AI, such “deep learning” systems don’t need to be programmed with a human expert’s knowledge. Instead, they learn on their own, often from large training data sets, until they can see patterns and spot anomalies in data sets that are far larger and messier than human beings can cope with.
AI isn’t just transforming science; it is speaking to you in your smartphone, taking to the road in driverless cars, and unsettling futurists who worry it will lead to mass unemployment. For scientists, prospects are mostly bright: AI promises to supercharge the process of discovery.
Unlike a graduate student or a postdoc, however, neural networks can’t explain their thinking: The computations that lead to an outcome are hidden. So their rise has spawned a field some call “AI neuroscience”: an effort to open up the black box of neural networks, building confidence in the insights that they yield.
An important recent advance in AI has been machine learning, which shows up in technologies from spellcheck to self-driving cars and is often carried out by computer systems called neural networks. Any discussion of AI is likely to include other terms as well.
ALGORITHM A set of step-by-step instructions. Computer algorithms can be simple (if it’s 3 p.m., send a reminder) or complex (identify pedestrians).
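For the simple end of that spectrum, here is a minimal Python sketch of the reminder example; the time check and the message text are placeholders, not from any real system:

```python
from datetime import datetime
from typing import Optional

def maybe_send_reminder(now: datetime) -> Optional[str]:
    """A simple algorithm: one fixed, step-by-step rule."""
    if now.hour == 15:  # it's 3 p.m.
        return "Reminder: afternoon check-in."
    return None

print(maybe_send_reminder(datetime(2017, 7, 7, 15, 0)))
```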
BACKPROPAGATION The way many neural nets learn. They find the difference between their output and the desired output, then adjust the calculations in reverse order of execution.
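As a rough illustration, here is a minimal numpy sketch of that backward pass for a tiny two-layer network; the toy data, layer sizes, and learning rate are made up for illustration:

```python
import numpy as np

# Toy data: learn y = 0.5 * x on a handful of points.
x = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 0.5 * x

rng = np.random.default_rng(0)
w1 = rng.normal(size=(1, 4))   # input -> hidden weights
w2 = rng.normal(size=(4, 1))   # hidden -> output weights
lr = 0.01

for step in range(2000):
    # Forward pass.
    h = np.tanh(x @ w1)          # hidden-layer activations
    pred = h @ w2                # network output
    err = pred - y               # difference from the desired output

    # Backward pass: push the error through the layers in reverse order of execution.
    grad_w2 = h.T @ err
    grad_h = err @ w2.T * (1 - h ** 2)   # tanh derivative
    grad_w1 = x.T @ grad_h

    # Adjust the weights a little in the direction that shrinks the error.
    w1 -= lr * grad_w1
    w2 -= lr * grad_w2

print(np.round(np.tanh(x @ w1) @ w2, 2))  # predictions should end up close to the targets
```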
BLACK BOX A description of some deep learning systems. They take an input and provide an output, but the calculations that occur in between are not easy for humans to interpret.
DEEP LEARNING How a neural network with multiple layers becomes sensitive to progressively more abstract patterns. In parsing a photo, layers might respond first to edges, then paws, then dogs.
EXPERT SYSTEM A form of AI that attempts to replicate a human’s expertise in an area, such as medical diagnosis. It combines a knowledge base with a set of hand-coded rules for applying that knowledge. Machine-learning techniques are increasingly replacing hand coding.
GENERATIVE ADVERSARIAL NETWORKS A pair of jointly trained neural networks that generates realistic new data and improves through competition. One net creates new examples (fake Picassos, say) as the other tries to detect the fakes.
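Here is a compact sketch of that two-network competition, assuming PyTorch is available; a one-dimensional Gaussian stands in for the “real” data (the real Picassos of the example), and the network sizes and step counts are arbitrary:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0        # "real" examples: samples from N(3, 1)
    fake = G(torch.randn(64, 1))           # the generator's attempts

    # Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(5, 1)).detach().squeeze())  # samples should drift toward the real mean of 3
```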
MACHINE LEARNING The use of algorithms that find patterns in data without explicit instruction. A system might learn how to associate features of inputs such as images with outputs such as labels.
NATURAL LANGUAGE PROCESSING A computer’s attempt to “understand” spoken or written language. It must parse vocabulary, grammar, and intent, and allow for variation in language use. The process often involves machine learning.
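As a toy illustration of the machine-learning side of this, the sketch below (assuming scikit-learn is installed, with a made-up four-sentence training set) learns to label text as positive or negative from word counts:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny made-up training set: sentences labeled 1 (positive) or 0 (negative).
texts = ["great local reporting", "loved this story", "terrible coverage", "boring and inaccurate"]
labels = [1, 1, 0, 0]

vec = CountVectorizer()              # turn words into count features
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

print(clf.predict(vec.transform(["great story"])))  # likely [1]
```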
NEURAL NETWORK A highly abstracted and simplified model of the human brain used in machine learning. A set of units receives pieces of an input (pixels in a photo, say), performs simple computations on them, and passes them on to the next layer of units. The final layer represents the answer.
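The sketch below traces that flow with numpy: made-up “pixel” values pass through three layers of units, each doing a simple weighted sum and nonlinearity. The weights here are random rather than trained, so only the data flow is meaningful:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny made-up "photo": 6 pixel values fed into a 3-layer network.
pixels = rng.random(6)

w1, b1 = rng.normal(size=(6, 8)), np.zeros(8)    # first layer of units
w2, b2 = rng.normal(size=(8, 4)), np.zeros(4)    # second layer
w3, b3 = rng.normal(size=(4, 2)), np.zeros(2)    # final layer: the "answer" (2 labels)

def relu(v):
    return np.maximum(v, 0.0)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

h1 = relu(pixels @ w1 + b1)      # each unit performs a simple computation ...
h2 = relu(h1 @ w2 + b2)          # ... and passes its result to the next layer
answer = softmax(h2 @ w3 + b3)   # final layer: probabilities over the two labels
print(answer)
```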
NEUROMORPHIC CHIP A computer chip designed to act as a neural network. It can be analog, digital, or a combination.
PERCEPTRON An early type of neural network, developed in the 1950s. It received great hype but was then shown to have limitations, suppressing interest in neural nets for years.
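Rosenblatt’s original learning rule still fits in a few lines; here is a sketch that trains a perceptron on the (linearly separable) logical AND problem, with a made-up learning rate and epoch count:

```python
import numpy as np

# Toy linearly separable problem: learn logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(10):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        # The perceptron update: nudge the weights only when the prediction is wrong.
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # [0, 0, 0, 1]
```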
REINFORCEMENT LEARNING A type of machine learning in which the algorithm learns by acting toward an abstract goal, such as “earn a high video game score” or “manage a factory efficiently.” During training, each effort is evaluated based on its contribution toward the goal.
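A tabular Q-learning sketch on a made-up five-cell corridor (reach the rightmost cell to earn a reward of 1) shows the idea; the environment and hyperparameters are invented for illustration:

```python
import numpy as np

# Made-up environment: a 5-cell corridor; start in cell 0, reward +1 for reaching cell 4.
n_states, n_actions = 5, 2                 # actions: 0 = step left, 1 = step right
Q = np.zeros((n_states, n_actions))        # the algorithm's value estimates
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != 4:
        # Explore at random occasionally (or while the estimates are still tied), otherwise exploit.
        if rng.random() < epsilon or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s_next == 4 else 0.0    # each step is evaluated against the goal
        # Q-learning update: move the estimate toward reward plus discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # learned policy: the non-goal cells should prefer "right" (1)
```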
STRONG AI AI that is as smart and well-rounded as a human. Some say it’s impossible. Current AI is weak, or narrow. It can play chess or drive but not both, and lacks common sense.
SUPERVISED LEARNING A type of machine learning in which the algorithm compares its outputs with the correct outputs during training. In unsupervised learning, the algorithm merely looks for patterns in a set of data.
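The contrast is easy to see in code. The sketch below, assuming scikit-learn is installed and using two made-up clusters of points, fits a supervised classifier with labels and then an unsupervised k-means without them:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two made-up blobs of 2-D points.
a = rng.normal(loc=[0, 0], scale=0.5, size=(20, 2))
b = rng.normal(loc=[3, 3], scale=0.5, size=(20, 2))
X = np.vstack([a, b])

# Supervised: we provide the correct labels and the model learns to reproduce them.
y = np.array([0] * 20 + [1] * 20)
clf = LogisticRegression().fit(X, y)
print(clf.predict([[2.8, 3.1]]))       # likely [1]

# Unsupervised: no labels, the algorithm merely looks for structure in the data.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters[:5], clusters[-5:])     # two groups, discovered without labels
```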
TENSORFLOW A collection of software tools developed by Google for use in deep learning. It is open source, meaning anyone can use or improve it. Similar projects include Torch and Theano.
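A minimal sketch of the TensorFlow workflow via its Keras interface, assuming TensorFlow 2.x is installed; the data here are random numbers just to show the steps of defining, compiling, and fitting a model:

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x is installed

# Made-up data: 100 samples with 4 features and 3 possible labels.
x = np.random.rand(100, 4).astype("float32")
y = np.random.randint(0, 3, size=100)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)   # the random labels are only there to show the workflow
print(model.predict(x[:1]))            # probabilities over the 3 labels
```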
TRANSFER LEARNING A technique in machine learning in which an algorithm learns to perform one task, such as recognizing cars, and builds on that knowledge when learning a different but related task, such as recognizing cats.
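One common recipe, sketched below with TensorFlow’s Keras API as an assumption (it downloads pretrained ImageNet weights): freeze a network trained on one task and train only a small new output layer for the related one. The five “cat breed” classes are hypothetical:

```python
import tensorflow as tf  # assumes TensorFlow 2.x and network access for the pretrained weights

# Reuse a network pretrained on ImageNet (cars, animals, and much else) ...
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False   # freeze the knowledge it already has

# ... and train only a small new "head" for the related task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),   # 5 hypothetical cat-breed classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(cat_images, cat_labels, epochs=3)  # labeled data for the new task would go here
```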
TURING TEST A test of AI’s ability to pass as human. In Alan Turing’s original conception, an AI would be judged by its ability to converse through written text.
If it’s not clear by now, allow us to be the first ones to tell you: Artificial Intelligence [AI] is here to stay. There have been some mixed responses to this phenomenon. Forrester reports that AI will eliminate 16% of U.S. jobs by 2025, which understandably freaks some people out.