Data Machina #193 – By Carlos

The craziest week in AI, and the downsizing of large models. There's this meme in which Captain Haddock asks: "What a week, huh?" and Tintin replies: "Captain, it's only Wednesday." That's how I felt this week. Let me try to summarize everything that happened during the craziest week in AI.

GPT-4 has been released. Social media was flooded with AI experts reporting on GPT-4's new capabilities: better reasoning and math, powerful OCR, and improved coding. @taranjeet has summarized all the details and the things people are building with GPT-4 in the awesome-gpt4 repo.

Some interesting, random GPT-4 stuff. @tyler decided to test GPT-4 on a particularly hard "algorithmic" problem. He wrote up his findings in Can GPT-4 *Actually* Write Code?

@jacksonfall had a great AI business idea. He gave GPT-4 a $100 budget and told it to make as much money as possible. Jackson simply follows GPT-4's instructions, while GPT-4 runs the business. Read the fascinating thread here.

@d_feldman tested GPT-3.5's and GPT-4's "reasoning abilities" with the same prompt, and concluded that GPT-4 does have a world model.

Unsurprisingly, and rather quickly, @alexalbert came up with 3 clever prompt hacks that jailbreak GPT-4 for the first time; see also this one, and this one too.

If you don't have access to GPT-4 yet, you can try it for free on Replit.

A whirlwind of new AI products and projects. Google and Microsoft are in an arms race to conquer office work with AI. Microsoft presented MS 365 Copilot & Business Chat (demo here). I'm not a fan of MS, but tbh that's an impressive demo. Google also announced Generative AI for Google Workspace (demo here).

Google and MS just killed hundreds of startups with those AI products. And I'm guessing that all these new AI office tools will empower some office workers, and also displace (replace?) many corporate middle managers, digital paper pushers, and information brokers.

Also very interesting: Google announced the PaLM API and MakerSuite, an easy way to prototype and build generative AI applications.

One of the latest Google Brain papers, ReAct: Synergizing Reasoning and Acting in Language Models, ignited a new wave of thinking on prompt engineering and the composability, extensibility, and scalability of LLMs.

On that topic, @intrcnnctd wrote a great post suggesting the amazing simplicity and power of the AI-in-the-loop pattern. Instead of asking GPT to just intelligently autocomplete your text, you prompt it to respond in a thought/action/observation loop.
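
To make the pattern concrete, here is a minimal, hypothetical sketch of a thought/action/observation loop in Python. The prompt format, tool names, and the pre-1.0 OpenAI completions call are illustrative assumptions, not code from the post or the paper:

```python
# Minimal ReAct-style loop: the model interleaves Thought/Action steps,
# and our code supplies each Observation. Illustrative sketch only.
import openai  # assumes the pre-1.0 openai-python completions API

TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # toy tool, for illustration only
    "search": lambda q: "stubbed search result for: " + q,
}

PROMPT = """Answer the question by interleaving Thought, Action and Observation steps.
Available actions: calculator[expression], search[query].
Finish with: Answer: <final answer>.

Question: {question}
"""

def react(question: str, max_steps: int = 5) -> str:
    transcript = PROMPT.format(question=question)
    for _ in range(max_steps):
        text = openai.Completion.create(
            model="text-davinci-003",
            prompt=transcript,
            stop=["Observation:"],   # pause before each observation so we run the tool
            max_tokens=256,
        )["choices"][0]["text"]
        transcript += text
        if "Answer:" in text:        # the model decided it is done
            return text.split("Answer:", 1)[1].strip()
        action = text.split("Action:")[-1].strip()   # e.g. "calculator[2+2]"
        name, _, arg = action.partition("[")
        observation = TOOLS[name.strip()](arg.rstrip("]"))
        transcript += f"\nObservation: {observation}\n"
    return "no answer within the step budget"
```

The key design choice is the stop sequence: generation halts right before each "Observation:", so your code, not the model, supplies the result of every action.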

In the same vein, a few days ago Microsoft launched Semantic Kernel, an open-source framework for integrating LLMs into your applications. SK supports prompt templating, chaining, vectorized memory, and intelligent planning. That sounds like a shot at LangChain.

Relentless as ever, the @LangChain team announced LangChain + Zapier Natural Language Actions (NLA), which lets you automate LLMs across 5k+ apps and 20k+ actions. This is the new frontier of AI if-this-then-that: real natural-language automation of AI processes, not the fake Robotic Process Automation (RPA) sold by the big consultancies.
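
Here's a rough sketch of what wiring Zapier NLA into a LangChain agent looked like, based on the LangChain docs of the time; import paths and agent names changed frequently across versions, so treat the specifics as assumptions:

```python
# Expose Zapier NLA actions as LangChain tools for a ReAct-style agent.
from langchain.llms import OpenAI
from langchain.agents import initialize_agent
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.utilities.zapier import ZapierNLAWrapper

llm = OpenAI(temperature=0)              # needs OPENAI_API_KEY set
zapier = ZapierNLAWrapper()              # needs ZAPIER_NLA_API_KEY set
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)

agent = initialize_agent(
    toolkit.get_tools(),                 # each exposed Zapier action becomes a tool
    llm,
    agent="zero-shot-react-description", # ReAct-style tool-using agent
    verbose=True,
)

agent.run("Summarize the last email I received and send the summary to Slack #general.")
```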

Anthropic – the self-proclaimed leader in safe, harmless, and honest AI – announced Claude, a next-generation AI assistant based on Anthropic's research. You can read about Claude's features and request access here (which I have done).

AssemblyAI announced Conformer-1, a SoTA speech recognition model that achieves near-human-level performance and robustness across different kinds of data.

Finally, two more announcements: Midjourney v5, which improves almost every aspect of AI-generated images, and Stable Diffusion Reimagine, a sort of img2img AI on steroids built on the new Clipdrop tool.

The downsizing of large models. Frustrated by the lack of access to massive AI compute and by the Tech Titans' locked-down large models, the AI community is pushing hard to scale large AI models down. IMO, this was mostly triggered by:

  1. Meta AI open-sourcing LLaMA, a small(ish) 65B-param model that performs well

  2. The dump of all the LLaMA model weights (see the llama-dl repo)

  3. The introduction of Microsoft LoRA, low-rank adaptation of LLMs, which dramatically reduces the number of trainable parameters for downstream tasks (see the sketch after this list)

  4. The excellent release of llama.cpp, LLaMA inference in pure C/C++ (repo)

  5. The key release of Stanford Alpaca, a low-compute-cost model fine-tuned from Meta AI's LLaMA 7B that performs similarly to OpenAI's text-davinci-003
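
A quick note on why LoRA shrinks training so much: instead of updating a frozen pretrained weight matrix W, it learns a low-rank update, h = Wx + (alpha/r)·BAx, with B of shape d×r, A of shape r×k, and rank r much smaller than d and k, so only A and B are trained. Below is a minimal sketch using the Hugging Face PEFT library; the checkpoint name and hyperparameters are illustrative assumptions, not taken from any of the projects above:

```python
# Minimal LoRA fine-tuning setup with Hugging Face PEFT (illustrative values).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("llama-7b-hf")  # hypothetical checkpoint name

config = LoraConfig(
    r=8,                                  # rank of the update matrices A and B
    lora_alpha=16,                        # scaling factor alpha
    target_modules=["q_proj", "v_proj"],  # inject adapters into attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)      # freezes W, adds trainable A and B
model.print_trainable_parameters()        # typically well under 1% of total params
```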

Here are 7 projects that exemplify this trend of scaling down large AI models:

  • Native Stanford Alpaca: train it and run it on your own machine

  • Meta AI LLaMA on your M1 Mac: how to run it in a few easy steps

  • MiniLLM: run modern LLMs on consumer-grade GPUs with a small, easy-to-use codebase, mostly in Python

  • Alpaca-LoRA: run a model that performs like OpenAI's text-davinci-003 on a Raspberry Pi

  • Int-4 LLaMA is not enough: a new way to reduce the RAM requirements of LLMs and easily build Python apps on top of faster LLM inference (see the back-of-envelope calculation after this list)

  • alpaca.cpp: run a fast ChatGPT-like model locally on your device

  • Cabrita-LoRA: the Alpaca dataset translated into Portuguese; after LoRA training, it achieves ChatGPT-like performance with only 16MB of LoRA weights
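
To see why int-4 weights put LLaMA-class models within reach of an M1 Mac or even a Raspberry Pi, here's a back-of-envelope sketch. It counts weights only and ignores activations, the KV cache, and quantization scales/zero-points, so real memory use runs somewhat higher:

```python
# Rough memory footprint of LLaMA-class models at different weight precisions.
PARAMS = {"LLaMA-7B": 7e9, "LLaMA-13B": 13e9, "LLaMA-65B": 65e9}
BYTES_PER_WEIGHT = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

for model, n in PARAMS.items():
    row = ", ".join(
        f"{fmt}: {n * b / 2**30:.1f} GiB" for fmt, b in BYTES_PER_WEIGHT.items()
    )
    print(f"{model}  ->  {row}")

# LLaMA-7B drops from ~13 GiB at fp16 to ~3.3 GiB at int4, which is why it
# suddenly fits on consumer laptops and (slowly) on single-board computers.
```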

But wait! Just two days ago, Stanford CRFM-HAI published this notice in the Alpaca GitHub repo:

We thank the community for their feedback on Stanford-Alpaca and for supporting our research. Our live demo is suspended until further notice.

I hear from my colleagues at @StanfordNLP that this is due to "security concerns and potential licensing issues…" Well, @pointnetwork went ahead anyway and yesterday posted whale-alpaca, or how to distill the model weights from Stanford Alpaca. I love the internet.

Well, if you still haven't had enough, here are some suggestions for a lazy Sunday. Have a good week!

  1. Summing up the GPT-4 Architecture

  2. ChatGPT, as code

  3. Transformers.js – run transformers in the browser

  4. AI is eating the world… "Generative AI is… Not Enough."

  5. What is temperature in NLP models?

  6. Large language models and SQL

  7. awesome-totally-open-chatgpt

  8. Training LLMs on Amazon SageMaker: best practices

  9. Handbook: dynamic prompting for Midjourney & friends

  10. [Fascinating] Dr. Bing's new Introduction to Neuroscience [32 vids]


  1. A new way to speed up diffusers with PyTorch 2.0

  2. GenAI: generative AI tooling for IPython

  3. Build a multilingual Movie RecSys application with Cohere Embeddings

  1. How I enhanced my search app with GPT-3

  2. Calibration of Opta’s xG (expected soccer goals) model

  3. shinyDeepDR – DL for cancer drug response with genomic data

  1. Google Vid2Seq: a visual language model for multi-event video description

  2. LLMs & Embeddings: transformer token vectors are not points in space

  3. ViperGPT: visual inference via Python execution for reasoning

  1. Automated Multistep Reasoning and Tool Use for LLMs

  2. Meet in the Middle: a new pre-training paradigm for LLMs

  3. An overview of recent developments in language models

  1. LLM-GROP: robotic task and motion planning with LLMs

  2. Q&A with Chief Technologist @AWS Robotics

  3. A new approach to navigation of unmanned aerial vehicles in cluttered environments

  1. Car thefts are increasing. Is the TikTok challenge to blame?

  2. WaPo Interactive – What would your 4 day week look like?

  3. Creating the famous GapMinder dataviz with GPT-4 in one experiment

  1. MLOps – Project Scaffold

  2. Unified MLOps @Expedia

  3. Automate ML model training and deployment with MLflow

  1. Adept – A new way to use computers with AI

  2. CAST AI – AI to cut Kubernetes costs in half

  3. Fairmatic – AI for Fleet Insurance

  1. COMICS TEXT+ Dataset for Comics Text Detection

  2. 25k IMDb Movie Dataset

  3. DCASE23 Anomalous Sound Detection Dataset

Did you like this post? Tell your friends about Data Machina. Thanks for reading.


Tips, suggestions, feedback? Email Carlos.

Curated by @ds_ldn at midnight.


