GPT-4 Technical Report

OpenAI: Josh Achiam, Steven Adler, S. Agarwal, L. Ahmad, Ilge Akkaya, Florencia Leoni Aleman, D. Almeida, Janko Altenschmidt, S. Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, S. Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mo Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, O. Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Benjamin Chess, Chester Cho, Casey Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, L. Fedus, Niko Felix, S. Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, C. Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Raphael Gontijo-Lopes, J. Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, S. Gu, Yufei Guo, Chris Hallacy, Jesse Han, J. Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, W. Hickey, P. Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, R. Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Lukasz Kaiser, Ali Kamali, I. Kanitscheider, N. Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Hendrik Kirchner, J. Kiros, Matthew Knight, Daniel Kokotajlo, Lukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Li, Rachel Lim, Molly Lin, Stephanie L. Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, A. Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, S. McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel P. Mossing, Tong Mu, M. Murati, O. Murk, David Mély, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, R. Ngo, Hyeonwoo Noh, Ouyang Long, Cullen O'Keefe, J. Pachocki, A. Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, J. Parish, Emy Parparita, Alexandre Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Pondé de Oliveira Pinto, Michael Pokorny, Michelle Pokrass, Vitchyr H. Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack W. Rae, Aditya Ramesh, Cameron Raymond, F. Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, N. Ryder, M. Saltarelli, Ted Sanders, Shibani Santurkar, G. Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, T. Sherbakov, Jessica Shieh, S. Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, M. Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky, Yang Song, N. Staudacher, F. Such, Natalie Summers, I. Sutskever, Jie Tang, N. Tezak, Madeleine Thompson, P. Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll L. Wainwright, Justin Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, Barret Zoph

Published 2023

ABSTRACT

We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4.
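The abstract's final claim, that some aspects of GPT-4's performance could be predicted from models trained with no more than 1/1,000th of its compute, rests on scaling behavior that is well approximated by a power law. A minimal sketch of that kind of extrapolation is below; all numeric values are invented for illustration and do not come from the report:

```python
import numpy as np

# Hypothetical small-scale runs (values are made up for illustration):
# fit a power law  loss(C) = a * C**b  to the small runs, then
# extrapolate to the full training compute budget (C = 1).
compute = np.array([1e-6, 1e-5, 1e-4, 1e-3])  # fraction of full compute
loss = np.array([4.2, 3.6, 3.1, 2.7])         # final loss of each small run

# A power law is linear in log-log space: log(loss) = b*log(C) + log(a),
# so a simple degree-1 polynomial fit recovers the exponent and prefactor.
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a = np.exp(log_a)

predicted_full_loss = a * 1.0 ** b  # extrapolated loss at full compute
print(f"fit: loss(C) = {a:.2f} * C^{b:.3f}; predicted loss at C=1: {predicted_full_loss:.2f}")
```

The exponent `b` comes out negative (loss falls as compute grows), and the extrapolated full-compute loss sits below every small-run loss, which is the qualitative shape such predictions rely on.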

PUBLICATION RECORD

  • Publication year: 2023

  • Publication date: 2023-03-15

  • Fields of study: Computer Science

  • Source metadata: Semantic Scholar

CITED BY

  • 22,527 citing papers are indexed for this report.