How was your morning? Perhaps you woke up, did a little online shopping while brewing your coffee, posted some pictures on social media over breakfast, glanced over the world news, drove to work, checked your email, picked up your mail, and opened up your latest issue of ACS Central Science. Pretty unremarkable, right? Maybe, but in the few hours that you have been awake you have most likely interacted with numerous instances of machine learning algorithms ticking away just below the surface of our everyday lives. The term “machine learning” may be defined as algorithms that allow computers to learn to perform tasks, identify relationships, and discern patterns without the need for humans to provide the underlying instructions. Conventional algorithms operate by sequentially executing a preprogrammed set of rules to achieve a particular outcome. Machine learning algorithms, by contrast, are provided with a set of examples by the user and train themselves to learn the rules from the data. This powerful idea dates back to at least the 1950s, but has only been fully realized in recent years with the advent of sufficiently large digital data sets over which to perform training (for example, Google photo albums, Amazon shopping lists, and Netflix viewing histories) and sufficiently powerful computer hardware and algorithms to perform the training (typically powerful graphics cards developed for the computer game industry that can be hijacked to conduct machine learning). This paradigm has revolutionized multiple domains of science and technology, with different variants of machine learning dominating, and in some cases enabling, multifarious applications such as retail recommendation engines, facial detection and recognition, language translation, autonomous and assisted driving, spam filtering, and character recognition.
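The contrast between preprogrammed rules and rules learned from examples can be made concrete with a toy sketch. Taking the spam-filtering application mentioned above as the setting (the word features, labels, and perceptron update below are illustrative assumptions, not anything described in this editorial), a hand-written rule is replaced by weights learned purely from labeled examples:

```python
from collections import defaultdict

# Conventional algorithm: a human writes the rule explicitly.
def is_spam_rule(text):
    return "free" in text.lower() or "winner" in text.lower()

# Machine-learning approach: the rule is learned from labeled examples.
def train_perceptron(examples, epochs=10):
    """Learn one weight per word from (text, label) pairs,
    where label is +1 for spam and -1 for legitimate mail."""
    w = defaultdict(float)
    for _ in range(epochs):
        for text, label in examples:
            words = text.lower().split()
            score = sum(w[t] for t in words)
            pred = 1 if score > 0 else -1
            if pred != label:          # mistake-driven update
                for t in words:
                    w[t] += label
    return w

def predict(w, text):
    """Classify text as spam (True) or not (False) using learned weights."""
    return sum(w[t] for t in text.lower().split()) > 0
```

The learned classifier is never told which words indicate spam; it infers that from the training examples, which is precisely the shift in paradigm described above.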
The success of these algorithms may be largely attributed to their enormous flexibility and power to extract patterns, correlations, and structure from data. These features can be nonintuitive and complicated functions that are difficult for humans to parse, or exist as weak signals that are only discernible from large, high-dimensional data sets that defy conventional analysis techniques. There remains a fundamental difference between artificial and human intelligence: no machine has yet exhibited generic human cognition, and for now, the Turing Test remains intact,1 but machine performance in certain specific tasks is unequivocally superhuman. A prominent example is provided by Google’s Go-playing computer program AlphaGo Zero. This program was provided only with the rules of the ancient board game and learned to play by playing games against itself in a form of reinforcement learning. After just 3 days of training, AlphaGo Zero roundly defeated the previous best algorithm, AlphaGo Lee, 100 games to 0; AlphaGo Lee had itself beaten the 18-time (human) world champion Lee Sedol. Remarkably, AlphaGo Zero employed previously unknown strategies of play that had never been discovered by human players over the 2,500-year history of the game.
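Learning by self-play can be illustrated at toy scale. The sketch below is emphatically not AlphaGo Zero’s method (which couples deep neural networks with Monte Carlo tree search); it is a minimal tabular Q-learning agent, with a negamax-style target, that teaches itself a simple take-1-or-2-stones game by playing against itself. The game, hyperparameters, and function names are all illustrative assumptions:

```python
import random

def train(n_stones=10, episodes=5000, alpha=0.5, eps=0.2):
    """Self-play Q-learning on a toy game: two players alternately
    take 1 or 2 stones; whoever takes the last stone wins."""
    Q = {}  # Q[(stones_left, action)] = value for the player to move
    q = lambda s, a: Q.get((s, a), 0.0)
    for _ in range(episodes):
        s = n_stones
        while s > 0:
            legal = [a for a in (1, 2) if a <= s]
            # epsilon-greedy: mostly exploit current values, sometimes explore
            if random.random() < eps:
                a = random.choice(legal)
            else:
                a = max(legal, key=lambda x: q(s, x))
            s2 = s - a
            if s2 == 0:
                target = 1.0  # took the last stone: win
            else:
                # the opponent moves next; their best outcome is our loss
                target = -max(q(s2, b) for b in (1, 2) if b <= s2)
            Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
            s = s2
    return Q

def best_move(Q, s):
    """Greedy policy extracted from the learned values."""
    return max((a for a in (1, 2) if a <= s), key=lambda a: Q.get((s, a), 0.0))
```

With these settings the greedy policy converges to the optimal strategy for this game (from s stones, take s mod 3 when that is nonzero), a rule that appears nowhere in the code: like AlphaGo Zero, the agent discovers its strategy entirely from games against itself, only here the game is small enough for a lookup table.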
ACS Central Science Virtual Issue on Machine Learning
Published August 8, 2018, in ACS Central Science