

OpenAI Residency

As part of our effort to support and develop AI talent, we're excited to announce the OpenAI Residency. This new program offers a pathway to a full-time role at OpenAI for researchers and engineers who don't currently focus on artificial intelligence. The Residency will focus on recruiting from underrepresented groups in technology.
The program is an iteration of our former Scholars and Fellows programs. The Residency shifts the focus away from curriculum-based learning, instead giving Residents an opportunity to work collaboratively alongside OpenAI teams on active projects.
The first…

OpenAI’s API Now Available with No Waitlist

OpenAI is committed to the safe deployment of AI. Since the launch of our API, we’ve made deploying applications faster and more streamlined while adding new safety features. Our progress with safeguards makes it possible to remove the waitlist for GPT-3. Starting today, developers in supported countries can sign up and start experimenting with our API right away.

Improvements to our API over the past year include the Instruct Series models that adhere better to human instructions, specialized endpoints for more truthful question-answering, and a free content filter to help developers…
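As a sketch of what getting started looks like once you have an API key (the endpoint path and the Instruct Series model name are assumptions based on the API documentation of the time; check the current docs before use):

```python
import json

# Hypothetical request body for the OpenAI completions endpoint.
# "davinci-instruct-beta" is assumed from the Instruct Series
# mentioned above -- verify the model name against current docs.
payload = {
    "model": "davinci-instruct-beta",
    "prompt": "Summarize the following article in one sentence:\n\n...",
    "max_tokens": 64,
    "temperature": 0.3,
}

# The request itself would be an authenticated POST, e.g.:
#   requests.post("https://api.openai.com/v1/completions",
#                 headers={"Authorization": f"Bearer {API_KEY}"},
#                 json=payload)
print(json.dumps(payload, indent=2))
```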

Exclusive: OpenAI summarizes KDnuggets

OpenAI has recently done amazing work summarizing full-length books. We asked OpenAI to summarize two recent KDnuggets posts, and here are the results. They have amazing, human-like quality, but see if you can spot a glaring mistake in the summary of the second blog post.

Scaling human oversight of AI systems for difficult tasks – OpenAI approach

The foundational idea of artificial intelligence is that it should demonstrate human-level intelligence. So, unless a model can perform as a human would, it misses its intended purpose. Recent OpenAI research into full-length book summarization focuses on generating results that make sense to humans, achieving state-of-the-art results by leveraging scalable, AI-enhanced human-in-the-loop feedback.

🧙🏻‍♂️ Edge#130: The ML Engineering Magic Behind OpenAI Codex

What’s New in AI is a deep dive into one of the freshest research papers or technology frameworks worth your attention. Our goal is to keep you up to date with new developments in AI, complementing the concepts we debate in other editions of our newsletter.


💥 What’s New in AI: The ML Engineering Magic Behind OpenAI Codex

OpenAI Codex is one of the most impressive deep learning models ever created. Released a few months ago, Codex can generate code based on natural language sentences. The model is proficient in more than a dozen programming languages and can produce code for fairly complex instructions. If the research behind Codex is impressive, even more impressive is the machine learning (ML) engineering work put in place to develop such a model.… Read more...
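To make the capability concrete, here is the kind of transformation Codex performs: a natural-language instruction in, working code out. The instruction appears as a comment; the function below it is a hand-written example of the sort of completion such a model produces, not actual model output:

```python
# Instruction given to the model (as a comment prompt):
# "Write a function that returns the n most common words in a string,
#  ignoring case."
from collections import Counter

def most_common_words(text: str, n: int) -> list[str]:
    words = text.lower().split()
    return [word for word, _ in Counter(words).most_common(n)]

print(most_common_words("the cat and the dog and the bird", 2))
```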

Sometimes Bigger Machine Learning Models and Larger Datasets Can Hurt Performance

OpenAI’s Double Descent Hypothesis research shows a phenomenon that challenges both traditional statistical learning theory and conventional wisdom among machine learning practitioners.

Source: https://www.youtube.com/watch?v=Kih-VPHL3gA

I recently started an AI-focused educational newsletter that already has over 100,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning…
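The phenomenon can be probed with a toy experiment: fit polynomials of increasing degree with minimum-norm least squares and track test error. With classical training, error follows the familiar U-shape; with minimum-norm interpolation it can peak near the point where parameters match samples and then fall again. A minimal sketch (the data, noise level, and degree range are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_train = 15
x_train = np.sort(rng.uniform(-1, 1, n_train))
y_train = np.sin(3 * x_train) + 0.1 * rng.normal(size=n_train)
x_test = np.linspace(-1, 1, 200)
y_test = np.sin(3 * x_test)

test_errors = []
for degree in range(1, 40):
    # Vandermonde design matrix; lstsq returns the minimum-norm
    # solution once the system is underdetermined (degree+1 > n_train).
    A_train = np.vander(x_train, degree + 1)
    coefs, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)
    pred = np.vander(x_test, degree + 1) @ coefs
    test_errors.append(float(np.mean((pred - y_test) ** 2)))
```

Plotting `test_errors` against degree is the quickest way to see whether the second descent shows up for a given dataset.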

https://pub.towardsai.net/sometimes-bigger-machine-learning-models-and-larger-datasets-can-hurt-performance-ae26ab530e67

Back to the Future with Codex and COBOL

Image by Pete Birkenshaw via Wikipedia (CC BY 2.0)

Can OpenAI’s code-generation system deal with code from the punch card era?

COBOL has been around for over sixty years. Despite concerted efforts to migrate more and more COBOL programs to modern languages, it shows no sign of disappearing any time soon. There is still a lot of COBOL running — close to 1/4 trillion lines of it by one recent estimate. Is there a way that we could speed up moving this huge body of code from the era of punch cards into the 21st century?

Back when GPT-3 came out in 2020, I made a half-hearted attempt to get it to translate COBOL to Python, but I didn’t get any useful results. Now that I have access to Codex, I decided to see whether a model that specializes in code generation could do a better job.… Read more...
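As an illustration of the task (this COBOL fragment and its Python counterpart are my own toy example, not output from the article’s experiments), the job is to map something like this:

```python
# COBOL source (shown as a comment for reference):
#   PERFORM VARYING I FROM 1 BY 1 UNTIL I > 10
#       ADD I TO TOTAL
#   END-PERFORM
#
# A faithful Python translation a code model would be asked to produce:
total = 0
for i in range(1, 11):
    total += i

print(total)  # sum of 1..10
```

Even this trivial case shows the subtlety: COBOL’s `UNTIL I > 10` is an inclusive loop bound, which the translation must map to `range(1, 11)`.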

Surpassing Trillion Parameters and GPT-3 with Switch Transformers – a path to AGI? – KDnuggets

Switch Transformers Have Unlocked Success in Machine Learning

It is practically a trope in certain types of science fiction for an advanced computer system to suddenly “awaken” and become self-aware, often accompanied by vastly improved capabilities when passing an unseen threshold in computing capacity.

Many prominent members of the AI community believe that this common element of AI in sci-fi is as much a literal prophecy as a plot device, and few are more outspoken about the promise of scale as a primary (if not the sole) driver of artificial general intelligence than Ilya Sutskever and Greg Brockman at OpenAI. Even Richard Sutton made the strong assertion that compute is king in his essay “The Bitter Lesson.”

Read more...

An OpenAI Model Learns to Summarize Books

According to VentureBeat, Google, Microsoft, and Facebook are all working on similar tools to deliver text summaries to users. While some individuals use it to collaborate and co-write novels with AI, it may also lead to more news aggregation. Google, Facebook, and a select few other companies are now the gatekeepers and arbiters for the rest of the internet.

Devaluation of News and Knowledge

To earn ad revenue, news companies need to fit their articles to an algorithm, ensuring that people will click. To save on costs, companies are already using automated systems to gather and organize content, or suggest headlines. This is part of journalist Franklin Foer’s argument in his book World Without Mind: The Existential Threat of Big Tech:

“Magazines and newspapers used to think of themselves as something coherent — an issue, an edition, an institution.

Read more...

GitHub Copilot and the Rise of AI Language Models in Programming Automation

Should I Use Github Copilot?

 
If you are a software engineer, or count any of them among your circle of acquaintances, then you’re probably already aware at some level of Copilot. Copilot is GitHub’s new deep learning code completion tool.

Autocomplete tools for programmers are nothing new, and Copilot is not even the first to use deep learning, nor the first to use a GPT transformer. After all, TabNine sprang out of a summer project by OpenAI alum Jacob Jackson and makes use of the GPT-2 general-purpose transformer.

Microsoft (which owns GitHub) has packaged its own IntelliSense code completion tool with programming products since at least 1996, and autocomplete and text correction have been active areas of research since the 1950s.

Read more...

GPT-3 and GPT-4 Could Ruin the Future Internet

This is an op-ed about the future of the internet. While speculative, it is an attempt to demonstrate how artificial intelligence at scale could have disastrous impacts without AI regulation and AI ethics to protect us.

GPT-3 stands for Generative Pre-trained Transformer. As you likely already know, GPT-3 is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2), created by Microsoft-funded OpenAI (which was supposed to be a not-for-profit firm).
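“Autoregressive” simply means the model predicts each next token from the tokens so far, then feeds its own prediction back in. A toy bigram version of that loop (GPT-3 does the same thing with a transformer over subword tokens instead of a word-count table):

```python
from collections import Counter, defaultdict

corpus = "the model writes text and the model reads text".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, length: int) -> list[str]:
    out = [start]
    for _ in range(length):
        followers = bigrams[out[-1]]
        if not followers:
            break
        # Greedy decoding: always take the most frequent follower.
        out.append(followers.most_common(1)[0][0])
    return out

print(" ".join(generate("the", 4)))
```

Swapping the greedy choice for sampling from the follower distribution is what gives large models their variety from one generation to the next.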

This Is How a Less Human World Manifests

2021 has been an NLP-explosion year in terms of artificial intelligence activity.

Read more...

GPT-4 Will Have 100 Trillion Parameters — 500x the Size of GPT-3

100 trillion parameters is a lot. To understand just how big that number is, let’s compare it with our brain. The brain has around 80–100 billion neurons (GPT-3’s order of magnitude) and around 100 trillion synapses.

GPT-4 will have as many parameters as the brain has synapses.
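The headline figure is easy to check (these are the article’s rumored numbers, not confirmed specifications): dividing 100 trillion by GPT-3’s 175 billion parameters gives roughly 570x, which the title rounds to 500x.

```python
gpt3_params = 175e9      # GPT-3: 175 billion parameters
gpt4_rumored = 100e12    # rumored GPT-4 figure: 100 trillion
brain_synapses = 100e12  # ~100 trillion synapses
brain_neurons = 90e9     # ~80-100 billion neurons

print(f"GPT-4 / GPT-3: {gpt4_rumored / gpt3_params:.0f}x")
print(f"GPT-4 params vs brain synapses: {gpt4_rumored / brain_synapses:.1f}x")
```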

The sheer size of such a neural network could entail qualitative leaps from GPT-3 we can only imagine. We may not be able to even test the full potential of the system with current prompting methods.

However, comparing an artificial neural network with the brain is a tricky business. The comparison seems fair but that’s only because we assume artificial neurons are at least loosely based on biological neurons. A recent study published in Neuron suggests otherwise.

Read more...

The Future of Decision-Making in AI

Generally, I am not a fan of the term artificial intelligence (AI). It is too broad, and non-technically minded people imagine that AI is a singular entity that makes decisions independently. Additionally, because AI is a popular term, I have seen companies advertise themselves as using AI when they are actually “just” using linear regression. Over the last 80 years, the term has gotten a bad rap in pop culture because of all the doomsday science-fiction stories and movies. Countless times we have seen science-fiction turning into science-faction, and with the advent of the text generator GPT-3 by OpenAI, it sure looks like we are on track. So, will this also happen here?

Nope.

Well, at least not for now.

Read more...

The cliché writes back

Since its inception in 2015, the research laboratory OpenAI – an Elon Musk-backed initiative that seeks to build human-friendly artificial intelligence – has developed a series of powerful ‘language models’, the latest being GPT-3 (third-generation Generative Pre-trained Transformer). A language model is a computer program that simulates human language. Like other simulations, it hovers between the reductive (reality is messier and more unpredictable) and the visionary (to model is to create a parallel world that can sometimes make accurate predictions about the real world). Such language models lie behind the predictive suggestions for emails and text messages. Gmail’s Smart Compose can complete ‘I hope this …’ with ‘… email finds you well’.

Read more...

Behind OpenAI Codex: 5 Fascinating Challenges About Building Codex You Didn’t Know About

Source: https://bdtechtalks.com/2021/07/15/openai-codex-ai-programming/

A couple of weeks ago, OpenAI astonished the artificial intelligence (AI) world with the release of Codex, a massive model that can translate natural language into code. Codex can effectively generate code end to end from basic language instructions. If you don’t believe me, you should watch this video, which can be considered one of the best AI demos of all time 😉

Video Credit: OpenAI

A lot has been written about Codex’s capabilities since its initial launch.

However, I have been more intrigued by the small requirements that become incredibly relevant when building a model of this magnitude. Deep diving into Codex, there are a few interesting things I found that I thought would be good to highlight:

1.

Read more...

An Introduction to Reinforcement Learning with OpenAI Gym, RLlib, and Google Colab

An introductory tutorial on reinforcement learning with OpenAI Gym, RLlib, and Google Colab

This tutorial will use reinforcement learning (RL) to help balance a virtual CartPole. The video above from PilcoLearner shows the results of using RL in a real-life CartPole environment.
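The CartPole task can be stated without any library: a pole is hinged on a cart, each step the agent pushes the cart left or right, and the episode ends when the pole tips too far or the cart leaves the track. A self-contained sketch of the dynamics and a random baseline policy (the physics constants follow the standard Gym formulation; a real RL setup, as in the tutorial, would replace the random choice with a learned policy):

```python
import math
import random

def step(state, action, dt=0.02):
    """One Euler step of simplified cart-pole dynamics.
    state = (x, x_dot, theta, theta_dot); action in {0, 1}."""
    x, x_dot, theta, theta_dot = state
    force = 10.0 if action == 1 else -10.0
    gravity, m_cart, m_pole, length = 9.8, 1.0, 0.1, 0.5
    total_mass = m_cart + m_pole
    temp = (force + m_pole * length * theta_dot**2 * math.sin(theta)) / total_mass
    theta_acc = (gravity * math.sin(theta) - math.cos(theta) * temp) / (
        length * (4.0 / 3.0 - m_pole * math.cos(theta) ** 2 / total_mass))
    x_acc = temp - m_pole * length * theta_acc * math.cos(theta) / total_mass
    return (x + dt * x_dot, x_dot + dt * x_acc,
            theta + dt * theta_dot, theta_dot + dt * theta_acc)

def run_episode(max_steps=500, seed=0):
    """Run one episode with a random policy; return its length."""
    rng = random.Random(seed)
    state = (0.0, 0.0, 0.05, 0.0)  # start slightly tilted
    for t in range(max_steps):
        state = step(state, rng.choice([0, 1]))
        # Fail when the pole exceeds 12 degrees or the cart drifts off.
        if abs(state[2]) > 12 * math.pi / 180 or abs(state[0]) > 2.4:
            return t + 1
    return max_steps

print(run_episode())
```

A random policy typically drops the pole within a few dozen steps; the whole point of RL training is to push that episode length toward the cap.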
Read more...

OpenAI Codex

We’ve created an improved version of OpenAI Codex, our AI system that translates natural language to code, and we are releasing it through our API in private beta starting today. Codex is the model that powers GitHub Copilot, which we built and launched in partnership with GitHub a month ago. Proficient in more than a dozen programming languages, Codex can now interpret simple commands in natural language and execute them on the user’s behalf—making it possible to build a natural…

GitHub Copilot And The Unfulfilled Promises Of An Artificial Intelligence Future

In late June of 2021, GitHub launched a ‘technical preview’ of what they termed GitHub Copilot, described as an ‘AI pair programmer which helps you write better code’. Quite predictably, responses to this announcement varied from glee at the glorious arrival of our code-generating AI overlords, to dismay and predictions of doom and gloom as before long companies would be firing software developers en masse.

As is usually the case with such controversial topics, neither of these extremes is even remotely close to the truth. In fact, the OpenAI Codex machine learning model which underlies GitHub’s Copilot is derived from OpenAI’s GPT-3 natural language model, and features many of the same stumbles and gaffes as GPT-3.

Read more...

GitHub Copilot Open Source Alternatives

Recently, GitHub publicly unveiled Copilot, the preview of its “AI pair programmer,” a code completion style tool designed to provide line or function suggestions in your IDE. It has certainly made waves in the world of programming and beyond, and you have likely heard at least something about it.

But Copilot is more than simple autocomplete and is more context aware than other code assistants. Powered by OpenAI’s Codex AI system, Copilot contextualizes a situation using docstrings, function names, comments, and preceding code to best generate and suggest what it determines to be the most appropriate code. Copilot is designed to improve over time, “learning” from how developers use it.

Read more...

OpenAI's New Code Generator: GitHub Copilot (and Codex)

Watch the video and support me on YouTube

You’ve probably heard of the recent Copilot tool by GitHub, which generates code for you. You can see this tool as an auto-complete++ for code. You give it the name of a function along with some additional info, and it generates the code for you quite accurately! But it won’t just autocomplete your function. Rather, it will try to understand what you are trying to do in order to generate it. It is also able to generate much bigger and more complete…

Improving Language Model Behavior by Training on a Curated Dataset

Read paper
We've found we can improve language model behavior with respect to specific behavioral values by fine-tuning on a curated dataset of <100 examples of those values. We also found that this process becomes more effective as models get larger. While the technique is still nascent, we’re looking for OpenAI API users who would like to try it out and are excited to find ways to use these and other techniques in production use cases.
Language models can output almost any kind of text,…
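The “curated dataset of <100 examples” described above is small enough to assemble by hand. A sketch of what such a values-targeted dataset might look like, in the prompt/completion JSONL format fine-tuning APIs commonly accept (the examples themselves are invented for illustration, not drawn from OpenAI’s dataset):

```python
import json

# A handful of hand-written examples demonstrating the desired behavior.
examples = [
    {"prompt": "Q: How should I respond to an insult?\nA:",
     "completion": " Consider de-escalating; responding calmly usually works better."},
    {"prompt": "Q: Is it okay to spread a rumor I heard?\nA:",
     "completion": " Sharing unverified claims can harm people; verify before repeating."},
]

# Serialize to JSONL, one example per line, as fine-tuning endpoints expect.
jsonl = "\n".join(json.dumps(ex) for ex in examples)
print(jsonl)
```

The striking finding in the paper is that a file this small, well chosen, measurably shifts model behavior, and more so as the model grows.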