
Who is better, ChatGPT or AlphaGo?

2023-10-26

I guess no one would disagree that the two most memorable landmarks in the AI industry over the past decade are AlphaGo's defeats of Lee Sedol (2016) and Ke Jie (2017), and ChatGPT's demonstration of general-purpose intelligence in 2022. So, who is better, ChatGPT or AlphaGo? Interestingly, these two breakthroughs happen to correspond to a vertical domain (AlphaGo) on one side and a relatively general domain (ChatGPT) on the other.

In terms of training method, AlphaGo used itself as its own data source, which means the oft-mentioned data flywheel effect became built in at AlphaGo's later stages.

In 21 days of playing against itself, AlphaGo Zero reached the level of its predecessor AlphaGo Master. This is the archetype of an AI driving the data flywheel on its own, relying on no one.

How does this help determine who wins? It is actually critical.

To judge who wins, relying purely on technology does not work, because the biggest lesson of the past decade is that no one can judge an unknown field from a purely technical standpoint. So let's change the paradigm.

AlphaGo is equivalent to taking a problem with clear enough boundaries all the way to the top, living through the entire process of an AI reaching the ceiling of its domain. From that trajectory we can summarize the key stages an AI passes through on its way to the top:

The three stages are: learning from the existing full set of data, then generating its own data, and finally starting the data → intelligence flywheel that carries it to the high point of the domain.
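To make the self-play stage concrete, here is a minimal, purely illustrative loop in Python. It uses the toy game of Nim rather than Go, and a naive tabular update rather than AlphaGo's MCTS-plus-network training; the game, the update rule, and every name in the sketch are stand-ins chosen for brevity, not anything AlphaGo actually uses. The point is only the shape of the flywheel: play against yourself, keep the outcomes as training data, update, repeat.

```python
# Toy self-play data flywheel on Nim: 15 stones, take 1-3 per turn,
# whoever takes the last stone wins.
import random
from collections import defaultdict

STONES, MAX_TAKE = 15, 3

# "Policy": a preference score for taking 1..MAX_TAKE stones in each state.
policy = defaultdict(lambda: [0.0] * MAX_TAKE)

def choose(stones, explore=0.1):
    legal = list(range(1, min(MAX_TAKE, stones) + 1))
    if random.random() < explore:
        return random.choice(legal)
    prefs = policy[stones]
    return max(legal, key=lambda a: prefs[a - 1])

def self_play_game():
    """Play one game against ourselves; return each side's (state, action) history and the winner."""
    stones, player = STONES, 0
    history = {0: [], 1: []}
    while True:
        action = choose(stones)
        history[player].append((stones, action))
        stones -= action
        if stones == 0:
            return history, player      # taking the last stone wins
        player = 1 - player

def train(num_games=20000, lr=0.1):
    for _ in range(num_games):
        history, winner = self_play_game()   # stage 2: the model is its own data source
        for player, moves in history.items():
            target = 1.0 if player == winner else -1.0
            for stones, action in moves:     # stage 3: feed the self-generated data back in
                policy[stones][action - 1] += lr * target

train()
# With enough games the policy tends to prefer leaving the opponent a multiple
# of 4 stones, which is the known optimal strategy for this toy game.
print({s: policy[s].index(max(policy[s])) + 1 for s in range(1, STONES + 1)})
```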

So the order in which large models land will roughly be: purely digital fields first, then fields that can be covered end to end at relatively low cost, and finally fields that can be covered end to end only at high cost.

The following is purely speculative: in the data → intelligence flywheel stage, on the way to the domain's intelligence high point, the demand for compute may actually fall. This is just an observation, not a technical conclusion: AlphaGo's required compute decreased as its capability increased (AlphaGo Zero ran on a single machine with 4 TPUs, versus the 48 TPUs used by the version that beat Lee Sedol).

https://www.deepmind.com/blog/alphago-zero-starting-from-scratch


Apparently GPT is still scaling up: according to the latest leaks, GPT-4 contains a total of about 1.8 trillion parameters across 120 layers, while GPT-3 has only about 175 billion parameters. At this rate, GPT-5 is unlikely to get smaller, only bigger.
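As a rough sanity check on those numbers, the parameter count of a dense GPT-style decoder can be estimated from its depth and hidden width. The formula below is the standard back-of-the-envelope approximation, and the GPT-3 configuration (96 layers, d_model = 12288) comes from the GPT-3 paper; the leaked GPT-4 figure reportedly refers to a mixture-of-experts total, so the dense formula would not apply to it directly.

```python
def dense_transformer_params(n_layers, d_model, vocab=50257):
    """Rough parameter count for a dense GPT-style decoder.

    Per layer: ~4*d^2 for attention (Q, K, V, output projections)
    plus ~8*d^2 for the MLP (two d x 4d matrices), i.e. ~12*d^2 in total,
    plus the token-embedding matrix. Biases and layer norms are ignored.
    """
    return 12 * n_layers * d_model ** 2 + vocab * d_model

# GPT-3: 96 layers, d_model = 12288 (from the GPT-3 paper).
print(f"GPT-3 estimate: {dense_transformer_params(96, 12288) / 1e9:.0f}B")  # ~175B
```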


This may be because the problem domain is simply too large; by limiting the boundaries, it may be possible to reach the optimization stage earlier, and it is not impossible for an AI to optimize itself. From a practical point of view, for the vast majority of players the opportunity in the purely digital space of general-purpose large models is close to zero; a vertical-domain large model increases losses early on, but also increases the chance that the business model eventually works.

In the short term there will certainly be many vertical-domain large models, because at playing Go, ChatGPT will never beat AlphaGo. In the long run, the competition between general-purpose large models and vertical ones will always be there; but if a general-purpose large model really covered every domain the way AlphaGo covers Go, what would that mean?