Much of what we do as modern people, at work and beyond, is to process information and generate action. GPT-4 will massively speed your ability to do these things, and with greater breadth and scope. Within a few years, this copilot will fall somewhere between useful and essential to most professionals and many other sorts of workers.


At its essence, GPT-4 predicts flows of language. Trained on massive amounts of text taken from publicly available internet sources to recognize the relationships that most commonly exist between individual units of meaning (including full or partial words, phrases, and sentences), LLMs can, with great frequency, generate replies to users’ prompts that are contextually appropriate, linguistically facile, and factually correct.
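To make that idea concrete, here is a toy sketch in Python, purely illustrative and nothing like GPT-4’s actual architecture or scale: it counts which words tend to follow which in a tiny corpus, then generates text by repeatedly predicting the next word.

```python
from collections import Counter, defaultdict

# Toy illustration only: learn which word tends to follow which in a tiny corpus,
# then generate text by repeatedly predicting the most likely next word.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # record how often `nxt` follows `prev`

def continue_text(start: str, length: int = 5) -> str:
    """Greedily extend `start` by always choosing the most common next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the"))  # prints a greedy continuation of "the"
```

GPT-4 works with far richer units of meaning and far deeper statistics, but the basic move, predicting the next piece of language from what came before, is the same.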


“Think of ChatGPT as a blurry JPEG of all the text on the Web,” Chiang writes. “It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation.”


First, I’d argue that repackaging available information actually describes an enormous share of human innovation, artistic or otherwise.

More importantly, though, LLMs actually have and use fundamentally new powers of knowledge organization.

While the web now contains an unfathomable amount of information, much of it is siloed into billions of individual pages.


The takeaway: in your overall quest for authoritative information, GPT-4 helps you start somewhere much closer to the finish line than if you didn’t have it as a resource.

More importantly, it possesses this capability because it is able to access and synthesize the web’s information in a significantly different way from existing information resources like Wikipedia or traditional search engines. Essentially, GPT-4 arranges vast, unstructured arrays of human knowledge and expression into a more connected and interoperable network, thus amplifying humanity’s ability to compound its collective ideas and impact.


Probe it by asking, “What assumptions are you making?” or “Can you show your reasoning?”
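If you’re scripting these conversations rather than typing into ChatGPT, the same probing works through the API. A minimal sketch, assuming the OpenAI Python client and an API key in your environment; the model name and the example question are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = []

def ask(question: str, model: str = "gpt-4") -> str:
    """Add a question to the running conversation and return the model's reply."""
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(model=model, messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Should a small nonprofit switch to a four-day workweek?"))
# Probe the answer instead of taking it at face value.
print(ask("What assumptions are you making?"))
print(ask("Can you show your reasoning?"))
```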


The irony here, of course, is that GPT-4 might not get funded with that pitch. The vision of AI it presents is both nuanced and strikingly different from how AI has generally been portrayed over the years.


If you simply let GPT-4 do all the work, with no human oversight or engagement, it’s a less powerful tool. It’s still a very human tool, of course, because human texts are the basis for its generations.

But when human users treat GPT-4 as a copilot or a collaborative partner, it becomes far more powerful. You compound GPT-4’s computational generativity, efficiency, synthetic powers, and capacity to scale with human creativity, human judgment, and human guidance.


How does it fit into humanity’s age-old quest to make life more meaningful and prosperous through technological innovation? To educate ourselves more effectively, ensure justice for everyone, and increase our opportunities for self-determination and self-expression?


As homework, they bring the ChatGPT prompts they tried and the responses they received for class discussion. They must turn in their final papers with a log of changes to the machine’s output.


Much as Google devalued the steel-trap memory, electronic calculators speeded up complex calculations, Wikipedia displaced the printed encyclopedia, and online databases diminished the importance of a vast physical library, so, too, platforms like ChatGPT will profoundly alter the most prized skills.

This is the familiar process of leaving laborious tasks to our machines and using the time thus saved to take on new and difficult tasks for ourselves.


If ChatGPT can do a job as well as a person, then humans shouldn’t duplicate those abilities; they must surpass them. The next task for higher education, then, is to prepare graduates to make the most effective use of the new tools and to rise above and go beyond their limitations.


In 1970, the typical calculator was too pricey for widespread use in schools, but prices hit a tipping point in the mid-1970s. Many parents and teachers were alarmed at the influx of new tools; they worried that math skills would atrophy and students would simply cheat.


I can offer personalized feedback, adaptive content, data analysis, and interactive simulations that can help students develop their skills, as well as their curiosity and creativity. However, I cannot replace the human elements of education, such as empathy, motivation, and socialization.


Instead of relying on me to provide the correct answers or solutions to problems, they could use me to generate multiple possible answers or solutions, and then ask students to compare, evaluate, and justify them. This way, they can foster students’ critical thinking and problem-solving skills, as well as their awareness of the uncertainty and complexity of real-world situations.


On the one hand, AI could offer new tools and inspiration for creative expression, enabling artists to explore new genres, styles, techniques, and combinations of media that might otherwise be inaccessible or challenging.

On the other hand, AI could also pose some threats and challenges for creative people, such as undermining their originality, authenticity, and autonomy.


Now, my own belief is that GPTs (and other AIs) will become essential tools for creative work of all kinds, somewhere between highly capable assistants and actual creative partners, by which I mean key participants in creating original ideas.


But don’t forget: the tool is not a substitute for your own creativity, skill, and judgment. You still have to write your own screenplay, polish it, sell it, and hope that it becomes a hit. And if it does, you can enjoy the fruits of your labor without worrying about OpenAI knocking on your door.


Ask a journalist the value that most shapes their work and there’s a good chance they’ll say “accuracy” (especially if they’re talking on the record). But there’s a reason journalism is often described as “the first rough draft of history”: in journalism, as in many industries, speed matters.

Whether journalists are reporting on wars, political campaigns, weather events, market conditions, or an extremely popular new restaurant, they’re forever battling the clock to gather information as quickly as they can to deliver some working theory of the truth to their audiences.

Sometimes, this imperative of speed means that journalists’ rough drafts of history are rough indeed. Context goes missing. Important aspects of a story have not yet surfaced.

So while most of us may think of journalism as a product, it’s ultimately a process: iterative and self-correcting. Ideally, tomorrow’s stories refine, clarify, and expand on today’s. Accuracy is a persistent defining value, but so is speed.


While the information it had provided was partially wrong, it was also mostly right. And, most importantly, GPT-4 produced this information extremely quickly.

When I’d Googled the same kind of information, the search results offered me dozens of links, some of which looked promising, others not. The Wikipedia experience differed in details but not in results.

With GPT-4, though, its capacity to instantly synthesize information from a wide range of sources meant that I received exactly the kind of list I’d been envisioning within seconds.

This list contained errors, but that was OK because, and this is a key point, I wasn’t looking for or expecting a finished product. I was looking for an informed starting point, a rough map of the territory I wanted to explore, to help me quickly get a sense of which questions I should be asking.


The experience itself is so responsive and self-propelling that a kind of intellectual escalation kicks in: asking one question makes you want to ask ten.


I also believe it’s equally important-possibly even more important-to flood the zone with truth.

What does that mean? Essentially, we have to make accurate, transparent, and truthful information extremely easy to find, for anyone who wants to find it.

In many respects, Wikipedia is a good example of what I’m envisioning here. It’s a massive archive of fact-based information, with transparent and rigorously enforced processes for adding and editing the information it contains.


What if every article published on NYTimes.com or FoxNews.com had a “Fact Check” button on it, just as they now have buttons to email or tweet an article?


In the early days of his channel, Codysseus was vigilant about answering every viewer comment within twenty-four hours. He quickly learned that maintaining this high level of responsiveness turned one-time viewers into repeat viewers, and repeat viewers into subscribers. But now that he has 150,000 subscribers and a new video can get upwards of 1000 comments, he has to leave more and more questions unanswered.


When I graduated from college in 1990, jobs like “web designer,” “SEO strategist,” and “data scientist” didn’t exist.

When my co-founders and I launched LinkedIn thirteen years later in 2003, none of our users had jobs like “social media manager,” “TikTok influencer,” or “virtual reality architect.”


In my opinion, ignoring AI is like ignoring blogging in the late 1990s, or social media circa 2004, or mobile in 2007.


That means embracing AI in the same spirit that we once embraced the Model T and the Apple II.


The core insight of The Startup of You is that your career is like a startup, and you are its CEO. Being a startup CEO is a lonely and stressful job, full of uncertainty and paralyzing dilemmas. One way to make the job a bit more manageable is to build a personal board of advisors who can help and support you, but even that approach has its limitations. Your board members, being human, won’t always be available to talk. GPT-4’s suggestions won’t always be immediately helpful, but they will provide something to react to and build on, which is better than being stuck with a blank page.


AI is better suited for tasks that require large amounts of data and information to be analyzed and organized quickly and accurately. AI can quickly search through large data sets and identify patterns and trends, as well as draw conclusions from the data. AI can also automate low-level legal tasks, such as document review and contract review, which can be tedious and time-consuming for human lawyers. AI may also be able to predict the outcome of a legal case, given certain facts and evidence. AI is also better suited for tasks that require precise, technical analysis, such as patent searches or financial analysis.


In my personal experience, the actual practice of law is more boring, tedious, and detail-obsessed than the typical television program or movie makes it seem. There’s a lot less delivering eloquent speeches to a jury, and a lot more reading thousands of pages of poorly written documents. AI would be terrible at the former, but is very good at the latter.


I’m not sure any human attorneys ever enjoy reading thousands of nearly identical contracts, however well they are paid!


Management consultants are often called upon to draw conclusions from large amounts of data, to benchmark and share best practices, and to create plans for starting and growing new business units. A lot of these tasks feel like they might be well-suited to AI.


Principle 1: Treat GPT-4 like an undergrad research assistant, not an omniscient oracle.


It also has many of the other drawbacks of a human research assistant: it’s not an expert, its grasp of any particular subject is fairly shallow, and it gets things wrong. In fact, when it’s wrong, it’s worse than a human research assistant, since a human will often have the good sense to warn you when they aren’t certain about the quality of their output.


As the director, you’re working with an actor to elicit the best performance. You’re not telling them, “Bend your neck fifteen degrees, and then after 2.5 seconds, look at the person across from you.” Instead, you’re asking them to make the audience feel a certain way: “Convince us you’re in love.”


In most of our work, we’re taught to plan in advance and avoid making mistakes. That’s because implementing a plan is costly in terms of time and other resources; there’s a reason a carpenter’s adage is to measure twice and cut once.

But what if implementing a plan was cheaper and faster than thinking about it?

That’s the confounding paradox of GPT-4 and LLMs. In far less time than it takes to debate a plan, GPT-4 can simply generate a complete response for you to review. If you don’t like the response, you can throw it away and generate another one, or you might just generate three variations to give you more potential choices.
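In practice, asking for several drafts at once is a single API call. A rough sketch, again assuming the OpenAI Python client; the prompt, model name, and settings are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# Ask for three independent drafts of the same prompt rather than debating a single plan.
response = client.chat.completions.create(
    model="gpt-4",
    n=3,              # three separate completions to choose among
    temperature=0.9,  # higher temperature makes the drafts more varied
    messages=[{"role": "user", "content": "Draft a 150-word kickoff email for a new design project."}],
)

for i, choice in enumerate(response.choices, start=1):
    print(f"--- Draft {i} ---\n{choice.message.content}\n")
```

Reviewing three drafts and discarding two often takes less time than arguing over a single outline.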


We thought we were getting all-knowing, supremely logical, and infallibly even-tempered automata; instead, we get a simulation of that smart but sometimes sketchy dude we’ve been arguing with on Reddit?!


In certain contexts, an LLM’s ability to generate nonfactual information can be tremendously useful. (In humans we call it “imagination,” and it’s one of the qualities we most prize in ourselves.)


Every day, we’re inundated by information. Much of it arrives without much context. A lot of it is extremely complex. Some is produced in a real effort to inform, clarify, and make sense of the world. Some is designed to flatter or shame us into buying something, or fill us with doubt, or intentionally mislead us, or just distract us.

Yet there are lots of settled truths (and mostly settled truths) out there, too, and I believe that having convenient access to this information has enormous value.


Whatever amount of error it contains, I think it’s safe to say we’ve learned to live with it, and we now regularly depend on Wikipedia to help navigate and make sense of the world.

The site’s success is perhaps explained with a perspective that founder Jimmy Wales has often expressed about Wikipedia: “It’s good enough knowledge, depending on what your purpose is.”


Good distribution is far more important to a product’s success than good service, or even the product’s initial quality. Without distribution, few people will even have a chance to try what you made.

As a free online resource, Wikipedia was much more accessible than any previous encyclopedia, including earlier digital ones like Microsoft Encarta. Web distribution also freed Wikipedia from printing and shipping costs, which meant it could cover so many topics that it was soon making print publications like Encyclopedia Britannica look decidedly, well, non-encyclopedic; skimpy, even. Finally, digital distribution meant that Wikipedia could publish edits and updates instantly and incessantly, transforming inaccuracy into a fairly correctable problem.


So when we hear urgent calls to regulate LLMs like we regulate many other industries, we should remember that today’s car and drug regulations did not arise fully fledged. They were informed by years of actual usage, and the associated, measurable problems and negative outcomes.


It sometimes seems as if our species’ mission statement is to create tools that transcend time, space, and matter, all in a quest to express the full power of our imaginations. That’s why we’re constantly inventing new technologies, including painting, writing, film, television, video games, and the metaverse: to help us “hallucinate” more vividly, and to share the results more easily.


The use of tools would have required the development of fine motor skills, which generally requires complex brain function. Additionally, the act of using tools to obtain food would have necessitated strategic thinking, problem solving, and planning, all of which are cognitive abilities that would have been beneficial to early humans in a variety of ways.

It’s difficult to say definitively, but if early humans were using tools to improve their chances of survival, they may have been able to devote more time and resources to social interaction, which could have led to the development of more complex communication systems and even the formation of larger social groups.


In higher education, we make the distinction between the arts and the sciences, typically characterizing the former as the most essential form of human expression: the realm where we explore fundamental emotions like love, courage, anger, and mercy. But which arts aren’t enabled, amplified, and extended by pencils, printing presses, paint, pianos, microphones, computers, and other artifacts of technology?


Above everything else, a car is a technology that prioritizes effortless and extremely powerful mobility, and it ends up having much different impacts on the world than, say, a horse-drawn carriage or a bicycle.