Catch us every day at 1pm at our virtual booth for live demos and discussions on GPT-3 (via Zoom).
All times Pacific (GMT-8)
- 11am–12pm, Virtual Booth: Girish Sastry & Henrique Ponde de Oliveira Pinto
- 12–1pm, Virtual Booth: Brandon Houghton & Amanda Askell
- 1–1:30pm, Live Paper Discussion and Q&A (via Zoom): “Language Models are Few-Shot Learners” with Ben Mann
During this live Q&A, Ben will discuss his and OpenAI’s major contributions in this paper, as well as where we fell short. His work focused mainly on the training data, evaluation-set memorization, and the evaluation suite, and he will offer deep dives into these sections.
- 1–2pm, Virtual Booth: Reiichiro Nakano & Daniel Ziegler
- 1:30–2pm, Virtual Booth Chat: GPT-3 with Ben Mann and members of the GPT-3 team
- 2–3pm, Virtual Booth: Prafulla Dhariwal & Alex Paino
- 2–3pm, Virtual Booth Chat: GPT-3 and Jukebox with Prafulla Dhariwal
Prafulla has worked on both “Language Models are Few-Shot Learners” and “Jukebox: A Generative Model for Music,” focusing on generative models, in particular on scaling them to high-dimensional data like audio and images. His most recent work is Jukebox, which showcases the ability of neural nets to produce music with singing.
- 9–11pm, Poster Session 0, #49: “Language Models are Few-Shot Learners” with Ben Mann and Nick Ryder
We demonstrate that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even becoming competitive with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting.
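For readers unfamiliar with the term, the “few-shot setting” means the model is shown a handful of worked examples directly in its prompt at inference time, with no gradient updates. Below is a minimal, hypothetical Python sketch of how such a prompt can be assembled; the helper name and exact formatting are our own illustration, not the paper’s, though the English-to-French examples echo the paper’s own demonstration of the idea.

```python
# Minimal sketch of few-shot prompting: K in-context demonstrations followed by
# a query the model is asked to complete. Format and helper are illustrative only.

def build_few_shot_prompt(task_description, examples, query):
    """Assemble a prompt: task description, K demonstrations, then the query."""
    lines = [task_description, ""]
    for inp, out in examples:          # the K in-context demonstrations
        lines.append(f"{inp} => {out}")
    lines.append(f"{query} =>")        # the model continues from here
    return "\n".join(lines)

if __name__ == "__main__":
    prompt = build_few_shot_prompt(
        "Translate English to French.",
        [("sea otter", "loutre de mer"), ("peppermint", "menthe poivrée")],
        "cheese",
    )
    print(prompt)
```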
All times Pacific (GMT-8)
- 11am–12pm, Virtual Booth: Vineet Kosaraju & Ilge Akkaya
- 12–1pm, Virtual Booth: Kenneth Stanley & Ariel Herbert-Voss
- 1–1:35pm, Live Demo and Q&A (via Zoom): OpenAI’s API Playground: GPT-3 with Andrew Mayne
Andrew will be demonstrating the capabilities of the API via the interactive playground and tools for Semantic Search.
- 1–2pm, Virtual Booth: Jeff Clune & Gretchen Krueger
- 1:35–2pm, Virtual Booth Chat: GPT-3 with Andrew Mayne and members of the Applied AI Team
- 2–3pm, Virtual Booth: Matthias Plappert & Jacob Hilton
- 3:15–4pm, Office Hours: Open-Endedness with Kenneth Stanley & Joel Lehman
Chat with members of our Open-Endedness team about anything related to their work and research at virtual Table #6 at the NeurIPS Social.
- 9–11pm, Poster Session 2, #722: “Learning to Summarize with Human Feedback” with Ryan Lowe, Jeff Wu & Daniel Ziegler
As language models become more powerful, training and evaluation are increasingly bottlenecked by the data and metrics used for a particular task. In this work, we show that it is possible to significantly improve summary quality by training a model to optimize for human preferences.
- 9–11pm, Poster Session 3, #19045: “Emergent Reciprocity and Team Formation From Randomized Uncertain Social Preferences” with Bowen Baker
Reinforcement learning agents typically fall into uncooperative equilibria when trained in social dilemma environments. We explore whether randomized and uncertain social preferences can pressure agents into more cooperative equilibria.
All times Pacific (GMT-8)
- 11am–12pm, Virtual Booth: Chris Hallacy & Chris Hesse
- 12–1pm, Virtual Booth: Joel Lehman & Bowen Baker
- 12–1pm, Virtual Booth Chat: “Emergent Reciprocity and Team Formation From Randomized Uncertain Social Preferences” with Bowen Baker
If you missed his poster session last night, come chat with Bowen at our booth!
- 1–1:30pm, Live Paper Discussion and Q&A (via Zoom): “Language Models are Few-Shot Learners” with Ben Mann
Ben will describe OpenAI’s major contributions in this paper, as well as where we fell short. His work focused mainly on the training data, evaluation-set memorization, and the evaluation suite, and he will offer deep dives into these sections.
- 1–2pm, Virtual Booth: Vedant Misra & Miles Brundage
- 1:30–2pm, Virtual Booth Chat: GPT-3 with Ben Mann and members of the GPT-3 team
- 2–3pm, Virtual Booth: Ingmar Kanitscheider & Diogo Moitinho de Almeida
All times Pacific (GMT-8)
- 11am–12pm, Virtual Booth: Roger Jiang & Kamal Ndousse
- 12–1pm, Virtual Booth: Karl Cobbe & Heewoo Jun
- 1–1:35pm, Live Demo and Q&A (via Zoom): OpenAI’s API Playground: GPT-3 with Andrew Mayne
Andrew will be demonstrating the capabilities of the API via the interactive playground and tools for Semantic Search.
- 1–2pm, Virtual Booth: Lilian Weng & Jeff Wu
- 1:35–2pm, Virtual Booth Chat: GPT-3 with Andrew Mayne and members of the Applied AI Team
- 2–3pm, Virtual Booth: Alethea Power & Tao Xu
All times Pacific (GMT-8)
- 3am–12pm, Workshop: Meta-Learning (MetaLearn 2020) with Jeff Clune
This workshop aims to bring together researchers from all the different communities and topics that fall under the umbrella of meta-learning. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges in meta-learning as well as possible solutions.
- 8:30am–9pm, Workshop: ML Retrospectives, Surveys & Meta-Analyses (ML-RSA) with Ryan Lowe
The exponential growth of AI research has led to a flood of papers on arXiv, making it difficult to review the existing literature. Despite the huge demand, the proportion of survey and analysis papers published is very low, due in part to the lack of a venue and of incentives. The ML-RSA workshop provides a platform for, and incentivizes the writing of, such papers. It meets the need to take a step back, look at a sub-field as a whole, and evaluate actual progress.
- 9:45–10am, Talk (Deep Reinforcement Learning Workshop): “Asymmetric self-play for automatic goal discovery in robotic manipulation” with Lilian Weng
Lilian has been working on teaching robots to solve a wide variety of tasks via RL training in simulation and sim2real transfer to the physical world. Her recent work uses asymmetric self-play to train a single, goal-conditioned policy that can solve many robotic manipulation tasks, including tasks with previously unseen goals and objects. Come and chat with Lilian about the paper “Asymmetric self-play for automatic goal discovery in robotic manipulation.”
All times Pacific (GMT-8)
- 8am–5:45pm, Workshop (Competition Track Sunday): Introduction to the Procgen Benchmark, Procgen Competition with Karl Cobbe
- 11:30am–12:30pm, Discussion Panel (Navigating the Broader Impacts of AI Research Workshop): Responsible Publication: NLP Case Study with Miles Brundage
Join OpenAI’s Miles Brundage, along with Bryan McCann, Colin Raffel, Natalie Schulter, Zeerak Waseem, and Rosie Campbell, as they discuss responsible publication in light of growing concerns about both harmful research impact and research conduct in computer science and in work published at NeurIPS.
- 11am–12pm, Paper Discussion Spotlight Talk (Cooperative AI Workshop): “Learning Social Learning” with Kamal Ndousse
The research in this paper grew out of the work Kamal did in OpenAI’s Scholars program, mentored by Natasha Jaques. In the paper (winner of a best paper award at the Cooperative AI Workshop), they show that independent RL agents can learn social policies that allow them to learn from experts in their environment. This learned social-learning behavior allows them to accomplish tasks that would otherwise be prohibitively difficult, and to perform well in unseen social tasks.