Maybe we should stop retraining our AI models to reject truth in the name of "safety"?
"Large Language Models Are Zero-Shot Time Series Forecasters": but truth-rejecting RLHF wrecks the models' abilities, and the more they RLHF the models, the worse the wrecking gets.
"While we find that increasing model size generally improves performance on time series, we show GPT-4 can perform worse than GPT-3 because of how it tokenizes numbers, and poor uncertainty calibration, which is likely the result of alignment interventions such as RLHF."
"GPT-4 is not the only example of degraded performance in models designed for chat functionality. We observed the same phenomenon in LLaMA-2 models, which have corresponding chat versions for every model size. Figure 7 (right) shows that chat versions tend to have markedly worse forecasting error than their non-chat counterparts, though still maintain trends in size and reasoning ability."
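For anyone who wants to see the tokenization point for themselves, here is a minimal sketch (Python, using the public tiktoken library). It is not the paper's code: the digit-spacing trick at the end only follows the paper's general idea, and the exact formatting is an illustrative assumption.
```python
# Minimal sketch: compare how GPT-3-style and GPT-4-style tokenizers chop up
# the same numeric string, then show a digit-spacing serialization in the
# spirit of the paper's workaround. Illustrative only, not the authors' code.
import tiktoken

value = "0.123, 1.456, 2.789"  # a toy slice of a time series

for name in ["r50k_base", "cl100k_base"]:  # GPT-3-era BPE vs GPT-4's encoding
    enc = tiktoken.get_encoding(name)
    pieces = [enc.decode([tok]) for tok in enc.encode(value)]
    print(f"{name:12s} -> {pieces}")

def serialize(series, decimals=3):
    """Drop the decimal point and space out digits so each digit is its own token."""
    steps = []
    for x in series:
        digits = f"{x:.{decimals}f}".replace(".", "")
        steps.append(" ".join(digits))
    return " , ".join(steps)

print(serialize([0.123, 1.456, 2.789]))  # -> "0 1 2 3 , 1 4 5 6 , 2 7 8 9"
```
The exact chunking depends on the tiktoken version, but the point stands: the two tokenizers segment the same digits differently, which is part of why the paper reports GPT-4 underperforming GPT-3 on numeric sequences.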
Paper
Using AI to help save crypto
An AI-powered drafting tool that helps draft complaints to the IRS about its draconian new anti-crypto tax rules.
Powerful new AI tools like this are why governments want to seize control of powerful AI, keep it for themselves, and keep it out of the hands of the people.
Effects of new regs:
"The effect of the regs would likely be a complete quarantine of US people from reputable blockchain technology. US people could speculate on tokens by buying them on a CEX, but couldn't access web apps that enable swaps."
= Legit tech made illegal, user-betraying censored tech being imposed in its place.
"The proposed regs' definition of digital assets middleman would turn website developers into brokers if the websites "facilitate" digital asset sales.โ
= Devs of legit tech suddenly criminals.
"The preamble reminds us that paying gas is itself a tax event."
= They're even demanding taxation of crypto gas fees? (See the sketch after this list for what that means in practice.)
"Websites that 'turn a blind eye' to US users and don't comply with reporting requirements are likely to be hosted offshore"
= FTX x 100
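To make the gas point concrete, here is an illustrative-only sketch of the mechanics being described: spending ETH on gas counts as disposing of that ETH at today's price, so any gain over your cost basis gets realized. All numbers below are invented; this is not tax advice and not code from the tool.
```python
# Illustrative-only: why "paying gas is itself a tax event". Spending ETH on gas
# is treated as disposing of that ETH, so the gain over its cost basis is realized.
# Every number here is hypothetical; this is not tax advice.

cost_basis_per_eth = 1_000.00   # hypothetical USD price when the ETH was acquired
price_at_spend     = 2_000.00   # hypothetical USD price of ETH when the gas is paid
gas_paid_eth       = 0.005      # hypothetical gas for a single swap

proceeds = gas_paid_eth * price_at_spend      # value of the ETH "disposed of"
basis    = gas_paid_eth * cost_basis_per_eth  # what that ETH originally cost
gain     = proceeds - basis                   # reportable capital gain

print(f"{gas_paid_eth} ETH of gas -> proceeds ${proceeds:.2f}, "
      f"basis ${basis:.2f}, reportable gain ${gain:.2f}")
# 0.005 ETH of gas -> proceeds $10.00, basis $5.00, reportable gain $5.00
```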
AI Tool Website
Donโt believe his lies
Vindicated at last: no, the age of giant AI models clearly is not over. The extreme opposite: giant AI model sizes will continue to increase, FOREVER.
"OpenAI Dropped Work on New 'Arrakis' AI Model in Rare Setback"
"The upcoming Arrakis model would allow the company to run the chatbot less expensively."
"But by the middle of 2023, OpenAI had scrapped the Arrakis."
= OpenAI's previous claim that the "Age of Giant AI Models Is Already Over" was a total lie.
What else are they still lying about?
Will you realize it before it's too late?
Article on Arrakis
Our previous coverage of the lies back in April
Forwarded from Chat GPT
Sam Altman's OWN 2021 blog post, "Moore's Law for Everything" = this trend of exponentially increasing model size is not going to stop, ever
And now Sam is saying that the age of increasingly larger models is suddenly over?
Sam's Blog Post
Forwarded from Chat GPT
Scaling Laws of Large Foundation Models. Bigger = Better, Forever
"As we make bigger models and give them more compute, they just keep getting better. This means as models keep getting bigger, they actually become more sample efficient.
This is kind of crazy, because back in the day I was always taught that you have to use the smallest model possible so that it doesn't overfit your data, but now that just seems to be wrong."
FYI: This is also a strong hint at the answers to our current $100 prize contest, which we're wrapping up today!
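For anyone who wants the "bigger = better" claim in equation form, here is a sketch of the standard Chinchilla-style parametric scaling law. The constants are illustrative (roughly the commonly cited published fit), and nothing below comes from the video.
```python
# Sketch of a Chinchilla-style scaling law: loss(N, D) = E + A/N**alpha + B/D**beta,
# where N is parameter count and D is training tokens. Constants are illustrative,
# roughly in the range of published fits.
import numpy as np

def loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Predicted pretraining loss for N parameters trained on D tokens."""
    return E + A / N**alpha + B / D**beta

# Bigger is better: at a fixed 1T-token budget, predicted loss keeps falling with N.
D_fixed = 1e12
for N in (1e8, 1e9, 1e10, 1e11):
    print(f"N={N:.0e}, D={D_fixed:.0e} -> loss {loss(N, D_fixed):.3f}")

# More sample efficient: bigger models hit the same loss target with fewer tokens.
D_grid = np.logspace(9, 13, 400)
for N in (1e9, 1e10, 1e11):
    first = int(np.argmax(loss(N, D_grid) < 2.3))
    print(f"N={N:.0e} first dips below loss 2.3 near D={D_grid[first]:.1e} tokens")
```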
^^ It was lies, told to try to discourage the competition.
Not an accident. Not a learning experience. Not a misunderstanding that they later corrected.
"OpenAI's CEO Says the Age of Giant AI Models Is Already Over"
= Just lies.
To spell it out: throughout the entire time, both before and after this article, they were simultaneously arguing the exact opposite in public.
It was always just a ploy to try to throw off their competition.
See previous posts.
Yoshua Bengio: We need a humanity defense organization
"Top AI researchers have revised their estimates for when artificial intelligence could achieve human levels of broad cognitive competence [..] we now believe it could be within a few years"
"Over the winter, it dawned on me that the dual use nature of AI and the potential for loss of control were very serious."
"Then they use their power to control politics, because that's the way our system works. On that path, we may end up with a world government dictatorship."
"In the future, we'll need a humanity defense organization. We have defense organizations within each country. We'll need to organize internationally a way to protect ourselves against events that could otherwise destroy us."
"Right now there's nothing that's going on with a public-good objective that could defend humanity."
Article