Google’s Digital News Initiative has committed £622,000 ($805,000) to fund an automated news writing initiative for U.K.-based news agency The Press Association. The money will help pay for the creation of Radar (Reporters And Data And Robots), snappily named software designed to generate upwards of 30,000 local news stories a month.
The Press Association has enlisted U.K.-based news startup Urbs Media to build software that turns news data into palatable content. Once up and running, the team hopes the software will fill some of the coverage gaps left under-served as financial strain deepens in newsrooms around the world.
In a news release heralding the financial commitment, Press Association Editor-in-Chief Peter Clifton called the move a “genuine game-changer,” stressing that the partnership will focus on stories that might not otherwise be written up as local newspapers continue to die off in this massive fourth-estate extinction. Of course, he was also quick to add that the move won’t do away with the human touch entirely.
What follows is an excellent overview of AI and related terms and technologies by Tim Appenzeller of Science (www.sciencemag.org).
Just what do people mean by artificial intelligence (AI)? The term has never had clear boundaries. When it was introduced in 1956, it was taken broadly to mean making a machine behave in ways that would be called intelligent if seen in a human.
Big data has met its match. In field after field, the ability to collect data has exploded—in biology, with its burgeoning databases of genomes and proteins; in astronomy, with the petabytes flowing from sky surveys; in social science, tapping millions of posts and tweets that ricochet around the internet. The flood of data can overwhelm human insight and analysis, but the computing advances that helped deliver it have also conjured powerful new tools for making sense of it all.
In a revolution that extends across much of science, researchers are unleashing artificial intelligence (AI), often in the form of artificial neural networks, on the data torrents. Unlike earlier attempts at AI, such “deep learning” systems don’t need to be programmed with a human expert’s knowledge. Instead, they learn on their own, often from large training data sets, until they can see patterns and spot anomalies in data sets that are far larger and messier than human beings can cope with.
AI isn’t just transforming science; it is speaking to you in your smartphone, taking to the road in driverless cars, and unsettling futurists who worry it will lead to mass unemployment. For scientists, prospects are mostly bright: AI promises to supercharge the process of discovery.
Unlike a graduate student or a postdoc, however, neural networks can’t explain their thinking: The computations that lead to an outcome are hidden. So their rise has spawned a field some call “AI neuroscience”: an effort to open up the black box of neural networks, building confidence in the insights that they yield.
An important recent advance in AI has been machine learning, which shows up in technologies from spellcheck to self-driving cars and is often carried out by computer systems called neural networks. Any discussion of AI is likely to include other terms as well.
ALGORITHM A set of step-by-step instructions. Computer algorithms can be simple (if it’s 3 p.m., send a reminder) or complex (identify pedestrians).
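The glossary’s own reminder example can be written out directly. This is a hypothetical sketch in Python; the function name and times are made up for illustration.

```python
from datetime import datetime

def reminder_due(now: datetime) -> bool:
    # A simple algorithm: check the hour; if it is 3 p.m., a reminder is due.
    return now.hour == 15

print(reminder_due(datetime(2024, 1, 1, 15, 0)))   # True
print(reminder_due(datetime(2024, 1, 1, 9, 0)))    # False
```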
BACKPROPAGATION The way many neural nets learn. They find the difference between their output and the desired output, then adjust the calculations in reverse order of execution.
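The idea can be sketched on a “network” with a single weight: predict, measure the difference from the desired output, then adjust the weight to shrink that difference. The numbers below are arbitrary; real backpropagation applies the same error-driven update layer by layer in reverse order.

```python
x, target = 2.0, 10.0   # input and desired output
w, lr = 0.5, 0.05       # initial weight and learning rate

for step in range(20):
    output = w * x              # forward pass
    error = output - target    # difference from the desired output
    grad = 2 * error * x       # gradient of the squared error w.r.t. w
    w -= lr * grad             # adjust the weight to reduce the error

print(w * x)  # approaches the target of 10.0
```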
BLACK BOX A description of some deep learning systems. They take an input and provide an output, but the calculations that occur in between are not easy for humans to interpret.
DEEP LEARNING How a neural network with multiple layers becomes sensitive to progressively more abstract patterns. In parsing a photo, layers might respond first to edges, then paws, then dogs.
EXPERT SYSTEM A form of AI that attempts to replicate a human’s expertise in an area, such as medical diagnosis. It combines a knowledge base with a set of hand-coded rules for applying that knowledge. Machine-learning techniques are increasingly replacing hand coding.
GENERATIVE ADVERSARIAL NETWORKS A pair of jointly trained neural networks that generates realistic new data and improves through competition. One net creates new examples (fake Picassos, say) as the other tries to detect the fakes.
MACHINE LEARNING The use of algorithms that find patterns in data without explicit instruction. A system might learn how to associate features of inputs such as images with outputs such as labels.
NATURAL LANGUAGE PROCESSING A computer’s attempt to “understand” spoken or written language. It must parse vocabulary, grammar, and intent, and allow for variation in language use. The process often involves machine learning.
NEURAL NETWORK A highly abstracted and simplified model of the human brain used in machine learning. A set of units receives pieces of an input (pixels in a photo, say), performs simple computations on them, and passes them on to the next layer of units. The final layer represents the answer.
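A toy forward pass makes the layered structure concrete: each layer’s units compute weighted sums of their inputs, apply a simple nonlinearity, and pass the results on. The weights here are invented for illustration, not trained.

```python
def relu(values):
    # A common simple nonlinearity: negative values become zero.
    return [max(0.0, v) for v in values]

def layer(inputs, weights):
    # One output per row of weights: a weighted sum of all inputs.
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

pixels = [0.2, 0.8, 0.5]                       # a tiny "image"
hidden = relu(layer(pixels, [[0.4, -0.1, 0.3],
                             [0.2, 0.6, -0.5]]))
answer = layer(hidden, [[1.0, -1.0]])          # final layer: the answer
print(answer)                                  # a single score, about -0.12
```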
NEUROMORPHIC CHIP A computer chip designed to act as a neural network. It can be analog, digital, or a combination.
PERCEPTRON An early type of neural network, developed in the 1950s. It received great hype but was then shown to have limitations, suppressing interest in neural nets for years.
REINFORCEMENT LEARNING A type of machine learning in which the algorithm learns by acting toward an abstract goal, such as “earn a high video game score” or “manage a factory efficiently.” During training, each effort is evaluated based on its contribution toward the goal.
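A bare-bones sketch of that loop: the learner is only told how much reward each action earns, never which action was “correct.” The two-action environment and its rewards are hypothetical.

```python
rewards = {"left": 0.2, "right": 0.8}   # the environment, hidden from the learner
value = {"left": 0.0, "right": 0.0}     # the learner's reward estimates

for step in range(10):
    if step < 2:                        # try each action once first
        action = ["left", "right"][step]
    else:                               # then exploit the best one seen so far
        action = max(value, key=value.get)
    r = rewards[action]                 # each effort is evaluated by its reward
    value[action] = r                   # remember what the action earned

print(max(value, key=value.get))        # the action the learner settles on
```

Real reinforcement learners also keep exploring occasionally rather than exploiting forever, since rewards are usually noisy rather than fixed.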
STRONG AI AI that is as smart and well-rounded as a human. Some say it’s impossible. Current AI is weak, or narrow. It can play chess or drive but not both, and lacks common sense.
SUPERVISED LEARNING A type of machine learning in which the algorithm compares its outputs with the correct outputs during training. In unsupervised learning, the algorithm merely looks for patterns in a set of data.
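A minimal supervised example is a 1-nearest-neighbor classifier: the training data pairs each input with its correct label, and a prediction for a new input copies the label of the closest training example. The data and labels below are made up.

```python
# Training examples: (input, correct label) pairs.
train = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]

def predict(x):
    # Return the label of the nearest training input.
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

print(predict(1.5))  # "small"
print(predict(8.5))  # "large"
```

An unsupervised method, by contrast, would receive only the inputs and have to group them without any labels.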
TENSORFLOW A collection of software tools developed by Google for use in deep learning. It is open source, meaning anyone can use or improve it. Similar projects include Torch and Theano.
TRANSFER LEARNING A technique in machine learning in which an algorithm learns to perform one task, such as recognizing cars, and builds on that knowledge when learning a different but related task, such as recognizing cats.
TURING TEST A test of AI’s ability to pass as human. In Alan Turing’s original conception, an AI would be judged by its ability to converse through written text.
Ideas make the world go round, but coming up with great ones isn’t always easy. Here are ten ways you can prompt your brain to get those ideas flowing. “The best way to come up with new ideas is to get really bored,” says prolific author Neil Gaiman.
In a similar vein, Isaac Asimov suggested spending more time alone with your thoughts, because the presence of others can inhibit your creativity. Spend time with just your thoughts, distracted by neither other people nor technology, and your ideas will be unfettered. That’s not to say that group brainstorming sessions don’t have their merits; they just have to be done right and allow for individual brainstorming time as well.
Coffee keeps you alert, but according to science it’s not the best drink for boosting creativity. Beer (or another alcoholic drink) makes you less focused on the things around you, which, like the isolation mentioned above, can lead to big ideas. Everything in moderation, of course.
Mind maps prompt you to make connections between different concepts, encouraging the creation of new ideas. By diagramming your thoughts, you’ll be able to go both deeper and broader with your subject and uncover ideas that you might have missed with regular text notes.
Ever try to fall asleep and suddenly your brain goes into overdrive thinking of new ideas? It’s a common phenomenon where your subconscious mind starts to take over because you’re finally relaxed and not distracted by anything else. Before bed is a good time to do a brain dump.
You might also be able to generate more ideas if you interrupt your sleep cycle: Wake up 60 minutes into a 90-minute cycle when your brain will be groggier and less likely to censor ideas you have percolating.
Great ideas happen in the weirdest places — like the shower. That environment puts us in a semi-meditative state where our minds are free to wander. There’s no guarantee you’ll get new ideas from your shower, but if you’re feeling stuck, might as well go get clean and see what happens.
If you can’t take a shower, similar activities that release a lot of dopamine in your brain, such as listening to music, can also boost your idea generation.
We are our worst idea censors. We might have too many ideas that we never pursue (and then feel guilty about) or we too quickly label some ideas as stupid. Think twice before rejecting a creative idea that you might be uncomfortable with, and perhaps try keeping a “new ideas document” that encourages you to write down every idea — without guilt or criticism.