AI-Powered Search: Google’s Transformation vs. Perplexity

TL;DR: Play the podcast (Audio Overview generated by NotebookLM)

  1. Abstract
  2. Google’s AI Transformation: From PageRank to Gemini-Powered Search
    1. The Search Generative Experience (SGE) Revolution
    2. Google’s LLM Arsenal
    3. Technical Architecture Integration
    4. Key Differentiators of Google’s AI Search
  3. Perplexity AI Architecture: The RAG-Powered Search Revolution
    1. Core Architecture Components
    2. Simplified Architecture View
    3. How Perplexity Works: From Query to Answer
    4. Technical Workflow Diagram
  4. The New Search Paradigm: AI-First vs AI-Enhanced Approaches
    1. Google’s Philosophy: “AI-Enhanced Universal Search”
    2. Perplexity’s Philosophy: “AI-Native Conversational Search”
    3. Comprehensive Technology & Business Comparison
  5. The Future of AI-Powered Search: A New Competitive Landscape
    1. Implementation Strategy Battle: Integration vs. Innovation
    2. The Multi-Modal Future
    3. Business Model Evolution Under AI
    4. Technical Architecture Convergence
    5. The Browser and Distribution Channel Wars
  6. Strategic Implications and Future Outlook
    1. Key Strategic Insights
    2. The New Competitive Dynamics
    3. Looking Ahead: Industry Predictions
  7. Recommendations for Stakeholders
  8. Conclusion

Abstract

This blog examines the rapidly evolving landscape of AI-powered search, comparing Google’s recent transformation, driven by its Search Generative Experience (SGE) and Gemini integration, with Perplexity AI’s native AI-first approach. Both companies now leverage large language models, but with fundamentally different architectures and philosophies.

The New Reality: Google has undergone a dramatic transformation from traditional keyword-based search to an AI-driven conversational answer engine. With the integration of Gemini, LaMDA, PaLM, and the rollout of AI Overviews (formerly SGE), Google now synthesizes information from multiple sources into concise, contextual answers—directly competing with Perplexity’s approach.

Key Findings:

  • Convergent Evolution: Both platforms now use LLMs for answer generation, but Google maintains its traditional search infrastructure while Perplexity was built AI-first from the ground up
  • Architecture Philosophy: Google integrates AI capabilities into its existing search ecosystem (hybrid approach), while Perplexity centers everything around RAG and multi-model orchestration (AI-native approach)
  • AI Technology Stack: Google leverages Gemini (multimodal), LaMDA (conversational), and PaLM models, while Perplexity orchestrates external models (GPT, Claude, Gemini, Llama, DeepSeek)
  • User Experience: Google provides AI Overviews alongside traditional search results, while Perplexity delivers answer-first experiences with citations
  • Market Dynamics: The competition has intensified with Google’s AI transformation, making the choice between platforms more about implementation philosophy than fundamental capabilities

This represents a paradigm shift where the question is no longer “traditional vs. AI search” but rather “how to best implement AI-powered search” with different approaches to integration, user experience, and business models.

Keywords: AI Search, RAG, Large Language Models, Search Architecture, Perplexity AI, Google Search, Conversational AI, SGE, Gemini.

Google has undergone one of the most significant transformations in its history, evolving from a traditional link-based search engine to an AI-powered answer engine. This transformation represents a strategic response to the rise of AI-first search platforms and changing user expectations.

The Search Generative Experience (SGE) Revolution

Google’s Search Generative Experience (SGE), now known as AI Overviews, fundamentally changes how search results are presented:

  • AI-Synthesized Answers: Instead of just providing links, Google’s AI generates comprehensive insights, explanations, and summaries from multiple sources
  • Contextual Understanding: Responses consider user context including location, search history, and preferences for personalized results
  • Multi-Step Query Handling: The system can handle complex, conversational queries that require reasoning and synthesis
  • Real-Time Information Grounding: AI overviews are grounded in current, real-time information while maintaining accuracy

Google’s LLM Arsenal

Google has strategically integrated multiple advanced AI models into its search infrastructure:

Gemini: The Multimodal Powerhouse
  • Capabilities: Understands and generates text, images, videos, and audio
  • Search Integration: Enables complex query handling including visual search, reasoning tasks, and detailed information synthesis
  • Multimodal Processing: Handles queries that combine text, images, and other media types

LaMDA: Conversational AI Foundation
  • Purpose: Powers natural, dialogue-like interactions in search
  • Features: Enables follow-up questions and conversational context maintenance
  • Integration: Supports Google’s shift toward conversational search experiences

PaLM: Large-Scale Language Understanding

  • Role: Provides advanced language processing capabilities
  • Applications: Powers complex reasoning, translation (100+ languages), and contextual understanding
  • Scale: Handles extended documents and multimodal inputs

Technical Architecture Integration

Google’s approach differs from AI-first platforms by layering AI capabilities onto existing infrastructure:

  • Hybrid Architecture: Maintains traditional search capabilities while adding AI-powered features
  • Scale Integration: Leverages existing massive infrastructure and data
  • DeepMind Synergy: Strategic integration of DeepMind research into commercial search applications
  • Continuous Learning: ML ranking algorithms and AI models learn from user interactions in real-time
  • Global Reach: AI features deployed across 100+ languages with localized understanding

Perplexity AI Architecture: The RAG-Powered Search Revolution

Perplexity AI represents a fundamental reimagining of search technology, built on three core innovations:

  1. Retrieval-Augmented Generation (RAG): Combines real-time web crawling with large language model capabilities
  2. Multi-Model Orchestration: Leverages multiple AI models (GPT, Claude, Gemini, Llama, DeepSeek) for optimal responses
  3. Integrated Citation System: Provides transparent source attribution with every answer

The platform offers multiple access points to serve different user needs: Web Interface, Mobile App, Comet Browser, and Enterprise API.
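The multi-model orchestration idea can be sketched in a few lines: classify the incoming query, then route it to a preferred model. The routing rules and model names below are illustrative assumptions, not Perplexity’s actual logic.

```python
# Minimal sketch of multi-model orchestration: route each query to the
# model assumed to suit it best. Rules and model names are hypothetical.

def classify_query(query: str) -> str:
    """Crude keyword-based classifier standing in for a learned router."""
    q = query.lower()
    if any(w in q for w in ("code", "function", "bug")):
        return "coding"
    if any(w in q for w in ("why", "explain", "compare")):
        return "reasoning"
    return "general"

# Hypothetical mapping from query type to preferred model.
MODEL_BY_TYPE = {
    "coding": "claude",
    "reasoning": "gpt",
    "general": "llama",
}

def select_model(query: str) -> str:
    return MODEL_BY_TYPE[classify_query(query)]

print(select_model("Why did the Roman Empire fall?"))  # -> gpt
```

In production such routing would weigh latency, cost, and quality signals rather than keywords, but the shape of the decision is the same.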

Core Architecture Components

Simplified Architecture View

For executive presentations and high-level discussions, a three-layer architecture view highlights the essential components.

How Perplexity Works: From Query to Answer

Understanding Perplexity’s workflow reveals why it delivers fundamentally different results than traditional search engines. Unlike Google’s approach of matching keywords to indexed pages, Perplexity follows a sophisticated multi-step process:

The Eight-Step Journey

  1. Query Reception: User submits a natural language question through any interface
  2. Real-Time Retrieval: Custom crawlers search the web for current, relevant information
  3. Source Indexing: Retrieved content is processed and indexed in real-time
  4. Context Assembly: RAG system compiles relevant information into coherent context
  5. Model Selection: AI orchestrator chooses the optimal model(s) for the specific query type
  6. Answer Generation: Selected model(s) generate comprehensive responses using retrieved context
  7. Citation Integration: System automatically adds proper source attribution
  8. Response Delivery: Final answer with citations is presented to the user
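The eight steps above can be sketched as a minimal pipeline. Retrieval and generation are stubbed with placeholders, since the point here is the structure of the flow, not the components.

```python
# Sketch of the eight-step RAG workflow. Sources, URLs, and the answer
# format are placeholders, not Perplexity's real internals.

def retrieve(query):
    # Steps 2-3: real-time retrieval + indexing (stubbed with fixed sources)
    return [
        {"url": "https://example.com/a", "text": "Fact A about the topic."},
        {"url": "https://example.com/b", "text": "Fact B about the topic."},
    ]

def assemble_context(sources):
    # Step 4: compile retrieved text into one numbered context block
    return "\n".join(f"[{i+1}] {s['text']}" for i, s in enumerate(sources))

def generate_answer(query, context):
    # Steps 5-6: model selection + answer generation (stubbed)
    return f"Answer to '{query}' based on:\n{context}"

def answer(query):
    sources = retrieve(query)                         # steps 1-3
    context = assemble_context(sources)               # step 4
    draft = generate_answer(query, context)           # steps 5-6
    citations = [s["url"] for s in sources]           # step 7
    return {"answer": draft, "citations": citations}  # step 8

result = answer("What is RAG?")
print(result["citations"])
```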

Technical Workflow Diagram

This sequence traces how a user query flows through Perplexity’s system, from query reception to cited answer.

This process typically completes in under 3 seconds, delivering both speed and accuracy.

The New Search Paradigm: AI-First vs AI-Enhanced Approaches

The competition between Google and Perplexity has evolved beyond traditional vs. AI search to represent two distinct philosophies for implementing AI-powered search experiences.

Google’s Philosophy: “AI-Enhanced Universal Search”

  • Hybrid Integration: Layer advanced AI capabilities onto proven search infrastructure
  • Comprehensive Coverage: Maintain traditional search results alongside AI-generated overviews
  • Gradual Transformation: Evolve existing user behaviors rather than replace them entirely
  • Scale Advantage: Leverage massive existing data and infrastructure for AI training and deployment

Perplexity’s Philosophy: “AI-Native Conversational Search”

  • Model Agnostic: Orchestrate best-in-class models rather than developing proprietary AI
  • Clean Slate Design: Built from the ground up with AI-first architecture
  • Answer-Centric: Focus entirely on direct answer generation with source attribution
  • Conversational Flow: Design for multi-turn, contextual conversations rather than single queries

Comprehensive Technology & Business Comparison

| Dimension | Google AI-Enhanced Search | Perplexity AI-Native Search |
| --- | --- | --- |
| Input | Natural language + traditional keywords | Pure natural language, conversational |
| AI Models | Gemini, LaMDA, PaLM (proprietary) | GPT, Claude, Gemini, Llama, DeepSeek (orchestrated) |
| Architecture | Hybrid (AI + traditional infrastructure) | Pure AI-first (RAG-centered) |
| Retrieval | Enhanced index + Knowledge Graph + real-time | Custom crawler + real-time retrieval |
| Core Tech | AI Overviews + traditional ranking | RAG + multi-model orchestration |
| Output | Hybrid (AI Overview + links + ads) | Direct answers with citations |
| Context | Limited conversational memory | Full multi-turn conversation memory |
| Extensions | Maps, News, Shopping, Ads integration | Document search, e-commerce, APIs |
| Business | Ad-driven + AI premium features | Subscription + API + e-commerce |
| UX | “AI answers + traditional options” | “Conversational AI assistant” |
| Products | Google Search with SGE/AI Overviews, Ads | Perplexity Web/App, Comet Browser |
| Deployment | Global rollout with localization | Global expansion, English-focused |
| Data Advantage | Massive proprietary data + real-time | Real-time web data + model diversity |
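The “full multi-turn conversation memory” contrast can be illustrated with a minimal sketch: each new question is prompted together with the prior turns, so a follow-up like “what about its population?” can resolve against earlier context. The prompt format and the stubbed model are assumptions for illustration.

```python
# Sketch of multi-turn conversational memory: every prior turn is
# included in the next prompt. The role/text prompt format is illustrative.

class Conversation:
    def __init__(self):
        self.history = []  # list of (role, text) pairs

    def build_prompt(self, user_msg: str) -> str:
        # Include every prior turn so the model can resolve references.
        lines = [f"{role}: {text}" for role, text in self.history]
        lines.append(f"user: {user_msg}")
        return "\n".join(lines)

    def ask(self, user_msg: str, model=lambda p: "(model reply)") -> str:
        prompt = self.build_prompt(user_msg)
        reply = model(prompt)
        self.history.append(("user", user_msg))
        self.history.append(("assistant", reply))
        return reply

conv = Conversation()
conv.ask("Tell me about Paris.")
prompt = conv.build_prompt("What about its population?")
print("Paris" in prompt)  # the earlier turn travels with the follow-up
```

A stateless search box, by contrast, would see only the follow-up string and have nothing to resolve “its” against.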

The Future of AI-Powered Search: A New Competitive Landscape

The integration of AI into search has fundamentally changed the competitive landscape. Rather than a battle between traditional and AI search, we now see different approaches to implementing AI-powered experiences competing for user mindshare and market position.

Implementation Strategy Battle: Integration vs. Innovation

Google’s Integration Strategy:

  • Advantage: Massive user base and infrastructure to deploy AI features at scale
  • Challenge: Balancing AI innovation with existing business model dependencies
  • Approach: Gradual rollout of AI features while maintaining traditional search options

Perplexity’s Innovation Strategy:

  • Advantage: Clean slate design optimized for AI-first experiences
  • Challenge: Building user base and competing with established platforms
  • Approach: Focus on superior AI experience to drive user acquisition

The Multi-Modal Future

Both platforms are moving toward comprehensive multi-modal experiences:

  • Visual Search Integration: Google Lens vs. Perplexity’s image understanding capabilities
  • Voice-First Interactions: Google Assistant integration vs. conversational AI interfaces
  • Video and Audio Processing: Gemini’s multimodal capabilities vs. orchestrated model approaches
  • Document Intelligence: Enterprise document search and analysis capabilities

Business Model Evolution Under AI

Advertising Model Transformation:

  • Google must adapt its ad-centric model to AI Overviews without disrupting user experience
  • Challenge of monetizing direct answers vs. traditional click-through advertising
  • Need for new ad formats that work with conversational AI

Subscription and API Models:

  • Perplexity’s success with subscription tiers validates alternative monetization
  • Growing enterprise demand for AI-powered search APIs and integrations
  • Premium features becoming differentiators (document search, advanced models, higher usage limits)

Technical Architecture Convergence

Despite different starting points, both platforms are converging on similar technical capabilities:

  • Real-Time Information: Both now emphasize current, up-to-date information retrieval
  • Source Attribution: Transparency and citation becoming standard expectations
  • Conversational Context: Multi-turn conversation support across platforms
  • Model Diversity: Google developing multiple specialized models, Perplexity orchestrating external models

The Browser and Distribution Channel Wars

Perplexity’s Chrome Acquisition Strategy:

  • $34.5B all-cash bid for Chrome represents unprecedented ambition in AI search competition
  • Strategic Value: Control over browser defaults, user data, and search distribution
  • Market Impact: Success would fundamentally alter competitive dynamics and user acquisition costs
  • Regulatory Reality: Bid likely serves as strategic positioning and leverage rather than realistic acquisition

Alternative Distribution Strategies:

  • AI-native browsers (Comet) as specialized entry points
  • API integrations into enterprise and developer workflows
  • Mobile-first experiences capturing younger user demographics

Strategic Implications and Future Outlook

The competition between Google’s AI-enhanced approach and Perplexity’s AI-native strategy represents a fascinating case study in how established platforms and startups approach technological transformation differently.

Key Strategic Insights

  • The AI Integration Challenge: Google’s transformation demonstrates that even dominant platforms must fundamentally reimagine their core products to stay competitive in the AI era
  • Architecture Philosophy Matters: The choice between hybrid integration (Google) vs. AI-first design (Perplexity) creates different strengths, limitations, and user experiences
  • Business Model Pressure: AI-powered search challenges traditional advertising models, forcing experimentation with subscriptions, APIs, and premium features
  • User Behavior Evolution: Both platforms are driving the shift from “search and browse” to “ask and receive” interactions, fundamentally changing how users access information

The New Competitive Dynamics

Advantages of Google’s AI-Enhanced Approach:

  • Massive scale and infrastructure for global AI deployment
  • Existing user base to gradually transition to AI features
  • Deep integration with knowledge graphs and proprietary data
  • Ability to maintain traditional search alongside AI innovations

Advantages of Perplexity’s AI-Native Approach:

  • Optimized user experience designed specifically for conversational AI
  • Agility to implement cutting-edge AI techniques without legacy constraints
  • Model-agnostic architecture leveraging best-in-class external AI models
  • Clear value proposition for users seeking direct, cited answers

Looking Ahead: Industry Predictions

Near-Term (1-2 years):

  • Continued convergence of features between platforms
  • Google’s global rollout of AI Overviews across all markets and languages
  • Perplexity’s expansion into enterprise and specialized vertical markets
  • Emergence of more AI-native search platforms following Perplexity’s model

Medium-Term (3-5 years):

  • AI-powered search becomes the standard expectation across all platforms
  • Specialized AI search tools for professional domains (legal, medical, scientific research)
  • Integration of real-time multimodal capabilities (live video analysis, augmented reality search)
  • New regulatory frameworks for AI-powered information systems

Long-Term (5+ years):

  • Fully conversational AI assistants replace traditional search interfaces
  • Personal AI agents that understand individual context and preferences
  • Integration with IoT and ambient computing for seamless information access
  • Potential emergence of decentralized, blockchain-based search alternatives

Recommendations for Stakeholders

For Technology Leaders:

  • Hybrid Strategy: Consider Google’s approach of enhancing existing systems with AI rather than complete rebuilds
  • Model Orchestration: Investigate Perplexity’s approach of orchestrating multiple AI models for optimal results
  • Real-Time Capabilities: Invest in real-time information retrieval and processing systems
  • Citation Systems: Implement transparent source attribution to build user trust

For Business Strategists:

  • Revenue Model Innovation: Experiment with subscription, API, and premium feature models beyond traditional advertising
  • User Experience Focus: Prioritize conversational, answer-first experiences in product development
  • Distribution Strategy: Evaluate the importance of browser control and default search positions
  • Competitive Positioning: Decide between AI-enhancement of existing products vs. AI-native alternatives

For Investors:

  • Platform Risk Assessment: Evaluate how established platforms are adapting to AI disruption
  • Technology Differentiation: Assess the sustainability of competitive advantages in a rapidly evolving AI landscape
  • Business Model Viability: Monitor the success of alternative monetization strategies beyond advertising
  • Distribution Dynamics: Monitor the browser control battle and distribution channel acquisitions
  • Regulatory Impact: Consider potential regulatory responses to AI-powered information systems and search market concentration

The future of search will be determined by execution quality, user adoption, and the ability to balance innovation with practical business considerations. Both Google and Perplexity have established viable but different paths forward, setting the stage for continued innovation and competition in the AI-powered search landscape.

Conclusion

The evolution of search from Google’s traditional PageRank-driven approach to today’s AI-powered landscape represents one of the most significant technological shifts in internet history. Google’s recent transformation with its Search Generative Experience and Gemini integration demonstrates that even the most successful platforms must reinvent themselves to remain competitive in the AI era.

The competition between Google’s AI-enhanced strategy and Perplexity’s AI-native approach offers valuable insights into different paths for implementing AI at scale. Google’s hybrid approach leverages massive existing infrastructure while gradually transforming user experiences, while Perplexity’s clean-slate design optimizes entirely for conversational AI interactions.

As both platforms continue to evolve, the ultimate winners will be users who gain access to more intelligent, efficient, and helpful ways to access information. The future of search will likely feature elements of both approaches: the scale and comprehensiveness of Google’s enhanced platform combined with the conversational fluency and transparency of AI-native solutions.

The battle for search supremacy in the AI era has only just begun, and the innovations emerging from this competition will shape how humanity accesses and interacts with information for decades to come.


This analysis reflects the state of AI-powered search as of August 2025. The rapidly evolving nature of AI technology and competitive dynamics may significantly impact future developments. Both Google and Perplexity continue to innovate at an unprecedented pace, making ongoing monitoring essential for stakeholders in this space.

Is the AI PC a Gimmick or a Faster Carriage?

TL;DR: The post discusses the impact of AI on productivity, particularly through the emergence of AI PCs powered by localized edge AI. It highlights how large language models and the Core Ultra processor enable AI PCs to handle diverse tasks efficiently and securely, touches on practical applications across various fields, and emphasizes the transformative potential of AI PCs in shaping the future of computing.

Translation from the Source: AI PC 是噱头还是更快的马车?

Is AI a Bubble or a Marketing Gimmick?

Since 2023, everyone has known that AI is very hot, very powerful, and almost magical. It can generate articles with elegant language and write comprehensive reports, easily surpassing 80% or even more of human output. As for text-to-image generation, music composition, and even videos, there are often impressive results. There’s no need to elaborate on its hype…

For professions like designers and copywriters, generative AI has indeed sped up the creative process, eliminating the need to start from scratch. Because it is so efficient, some people in these positions may even worry about losing their jobs. But for ordinary people, aside from being a novelty, tools like OpenAI’s ChatGPT and Stable Diffusion don’t seem to provide much practical help with their work. After all, most people don’t need to write well-structured articles or compose poems regularly. Moreover, after seeing many AI outputs, they often feel the results are mostly correct but useless: helpful, yet not very impactful.

So, when a phone manufacturer says it will no longer produce “traditional phones,” people scoff. When the concept of an AI PC emerges, it’s hard not to see it as a marketing gimmick. However, after walking around the exhibition area at Intel’s 2024 commercial client AI PC product launch, I found AI to be more useful than I imagined. Yes, useful—not needing to be breathtaking, but very useful.

The fundamental change in experience brought by localized edge AI

Since it is a commercial PC, productivity is its core attribute. If you don’t buy the latest hardware and can’t run the latest software, you risk being labeled as lagging in application skills. Take Excel as an example. Early on, efficiency in Excel meant using formulas for automatic calculations. Later it meant macro code for automatically filtering, sorting, and exporting data, which was quite difficult. A few years ago, learning Python seemed to be the trend, and without it one was not considered competent at data processing. Nowadays, with data visualization as the buzzword, most Excel users have to search for tutorials online and learn unfamiliar formulas on the spot, and complex operations often require repeated attempts.

So, can adding “AI” to a PC or installing an AI assistant make it trendy? After experiencing it firsthand, I can confirm that the AI PC is far from superficial. There is a company called ExtendOffice, specializing in Office plugins, which effectively solves the pain points of using Excel awkwardly: you just state your intention, and the AI assistant directly performs operations on the Excel sheet, such as currency conversion or encrypting a column of data. There’s no need to figure out which formula or function corresponds to your needs, no need to search for tutorials, and it skips the step-by-step learning process—the AI assistant handles it immediately.
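The “state your intention, the assistant acts” pattern can be illustrated with a toy intent dispatcher operating on a column of tabular data in memory. A real plugin (such as the ExtendOffice assistant mentioned above) would use an LLM to parse the intent and act on the live spreadsheet; the fixed keyword rule and the exchange rate here are stand-ins.

```python
# Toy sketch: map a natural-language intent to a spreadsheet operation.
# The keyword match stands in for LLM intent parsing; the 0.92 rate is
# an assumed example value.

rows = [["Widget", 10.0], ["Gadget", 25.0]]

def convert_currency(data, col, rate):
    """Return a copy of the table with one numeric column converted."""
    return [r[:col] + [round(r[col] * rate, 2)] + r[col + 1:] for r in data]

def handle_intent(intent: str, data):
    # Stand-in for LLM intent parsing: match a phrase, pick an operation.
    if "usd to eur" in intent.lower():
        return convert_currency(data, col=1, rate=0.92)
    raise ValueError("intent not understood")

print(handle_intent("Convert column B from USD to EUR", rows))
```

The user never needs to know which formula or function performs the conversion; the dispatcher chooses the operation from the stated goal.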

This highlights a particularly critical selling point of the AI PC: localization, and based on that, it can be embedded into workflows and directly participate in processing. We Chinese particularly love learning, always saying “teaching someone to fish is better than giving them a fish,” but the learning curve for fishing is too long. In an AI PC, you can get both the fish and the fishing skills because the fisherman (AI assistant) is always in front of you, not to mention it can also act as a chef or secretary.

Moreover, the “embedding” mentioned earlier is not limited to a single operation (like adding a column of data or a formula to Excel); it can generate multi-step, cross-software operations. This demonstrates the advantage of large language models: they can accept longer inputs, understand them, and break them down. For example, we can tell the AI PC: “Mute the computer, then open the last read document and send it to a certain email address.” Notably, in the current demonstration there is no need to specify the exact document name; vague instructions are understood.

Another operation that pleasantly surprised me was batch renaming files. In Windows, batch renaming requires some small tricks and can only produce regular names (numbers, letter suffixes, and so on). With the help of an AI assistant, we can make file names more personalized: adding the relevant customer’s name, varying the style, and so on. This seemingly simple task actually involves looking at each file, extracting key information, sometimes describing abstract information based on one’s own understanding, and then writing each new file name individually, a tedious process that becomes time-consuming with many files. With the AI assistant, it is just a matter of saying a sentence.

Understanding longer contexts, multi-modal inputs, and the rest all rely on the capabilities of large language models, but here everything runs locally, without cloud inference. Honestly, no one would expect that organizing file names in the local file system should require a round trip to the cloud, right? The hidden seams between the edge and the cloud really do limit our imagination, so these local operations of the AI PC opened my mind.
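The batch-renaming idea described above can be sketched in a few lines: a model proposes a descriptive name from each file’s contents, then the names are applied. `suggest_name` here is a hypothetical stand-in for a local model call, crudely using the first word of the file as a “summary.”

```python
# Sketch of AI-assisted batch renaming. `suggest_name` stands in for a
# local model that would summarize each file's contents into a name.

from pathlib import Path
import tempfile

def suggest_name(text: str) -> str:
    # Hypothetical model call; here, first word of the content as "summary".
    words = text.split()
    first_word = words[0].lower() if words else "untitled"
    return f"{first_word}_notes.txt"

def batch_rename(folder: Path) -> dict:
    """Rename every .txt file to a content-derived name; return the mapping."""
    mapping = {}
    for path in sorted(folder.glob("*.txt")):
        new_name = suggest_name(path.read_text())
        path.rename(folder / new_name)
        mapping[path.name] = new_name
    return mapping

with tempfile.TemporaryDirectory() as d:
    folder = Path(d)
    (folder / "doc1.txt").write_text("Acme quarterly report")
    mapping = batch_rename(folder)
    print(mapping)
```

A real assistant would also handle name collisions and let the user confirm the mapping before applying it.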

Compared with the familiar early cloud-based AI tools, localization brings many obvious benefits. For instance, natural language processing and other operations still work offline. Early users who relied heavily on large models felt the sky was falling whenever the service failed. Not to mention scenarios without internet, like on a plane: continuous availability is a basic need.

Local deployment can also address data security issues. Since the rise of large models, there have been frequent news of companies accidentally leaking data. Using ChatGPT for presentations, code reviews, etc., is great, but it requires uploading documents to the cloud. This has led many companies to outright ban employees from using ChatGPT. Subsequently, many companies chose to train and fine-tune private large models using open-source models and internal data, deploying them on their own servers or cloud hosts. Furthermore, we now see that a large model with 20 billion parameters can be deployed on an AI PC based on the Core Ultra processor.

These large models deployed on AI PCs have already been applied in various vertical fields such as education, law, and medicine, generating knowledge graphs, contracts, legal opinions, and more. For example, inputting a case into ThunderSoft’s Cube intelligent legal assistant can analyze the case, find relevant legal provisions, draft legal documents, etc. In this scenario, the privacy of the case should be absolutely guaranteed, and lawyers wouldn’t dare transmit such documents to the cloud for processing. Doctors have similar constraints. For research based on medical cases and genetic data, conducting genetic target and pharmacological analyses on a PC eliminates the need to purchase servers or deploy private clouds.

Incidentally, the large model on the AI PC also makes training simpler than imagined. Feeding the local files visible to you into the AI assistant can solve the problem of “correct nonsense” that previous chatbots often produced. For example, generating a quote email template with AI is easy, but it’s normal for a robot to not understand key information like prices, which requires human refinement. If a person handles this, preparing a price list in advance is a reasonable requirement, right? Price lists and FAQs need to be summarized and refined, then used to train newcomers more effectively—that’s the traditional view. Local AI makes this simple: let it read the Outlook mailbox, and it will learn the corresponding quotes from historical emails. The generated emails won’t just be template-level but will be complete with key elements. Our job will be to confirm whether the AI’s output is correct. And these learning outcomes can be inherited.
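The “learn quotes from historical emails” idea can be sketched as a tiny retrieval step: scan past email bodies for product and price pairs, then reuse them when drafting a new quote. The regex pattern and sample emails are illustrative assumptions; a real assistant would use the local model to extract this information from an Outlook mailbox.

```python
# Sketch: build a price book from historical email text, then draft a
# quote from it. Pattern and sample data are illustrative assumptions.

import re

past_emails = [
    "Hi, the quote for the X100 sensor is $450 per unit.",
    "As discussed, the T20 controller is $1,200 each.",
]

def build_price_book(emails):
    """Extract {product: price} pairs from email bodies."""
    book = {}
    for body in emails:
        for product, price in re.findall(r"the (\w+) \w+ is \$([\d,]+)", body):
            book[product] = float(price.replace(",", ""))
    return book

def draft_quote(product, book):
    price = book.get(product)
    if price is None:
        return f"[No historical price found for {product}]"
    return f"Our current price for the {product} is ${price:,.0f} per unit."

book = build_price_book(past_emails)
print(draft_quote("T20", book))
```

The point is the division of labor: retrieval supplies the key facts (prices), and generation only fills in the wording, avoiding the “correct nonsense” of a template-only reply.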

Three Major AI Engines Support Local Large Models

In the information age, we have experienced several major technological transformations. First was the popularization of personal computers, then the internet, and then mobile internet. Now we are facing the empowerment and even restructuring of productivity by AI. The AI we discuss today is not large-scale clusters for training or inference in data centers but the PCs at our fingertips. AIGC, video production, and other applications for content creators have already continuously amazed the public. Now we further see that AI PCs can truly enhance the work efficiency of ordinary office workers: handling trivial tasks, making presentations, writing emails, finding legal provisions, etc., and seamlessly filling in some of our skill gaps, such as using unfamiliar Excel functions, creating supposedly sophisticated knowledge graphs, and so on. All this relies not only on the “intelligent emergence” of large language models but also on sufficiently powerful performance to support local deployment.

We frequently mention the “local deployment” of large models, which relies on strong AI computing power at the edge. The AI PC relies on the powerful CPU+GPU+NPU triad of AI engines in the Core Ultra processor, whose combined computing power is sufficient to run a large language model with 20 billion parameters locally. AIGC applications such as text-to-image generation are comparatively light workloads.
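A quick back-of-the-envelope calculation suggests why local deployment of a 20-billion-parameter model is plausible on such hardware: weight memory scales with bytes per parameter, so quantization brings the footprint into laptop range. The bytes-per-parameter figures below are rough rules of thumb, not vendor specifications, and they cover weights only (activations and KV cache add more).

```python
# Rough weight-memory estimate for a 20B-parameter model at different
# precisions. Bytes-per-parameter values are rules of thumb, not specs.

PARAMS = 20e9  # 20 billion parameters

def model_size_gb(params: float, bytes_per_param: float) -> float:
    return params * bytes_per_param / 1e9

for label, bpp in [("FP16", 2.0), ("INT8", 1.0), ("INT4 (quantized)", 0.5)]:
    print(f"{label}: ~{model_size_gb(PARAMS, bpp):.0f} GB of weights")
```

At FP16 the weights alone need roughly 40 GB, which is out of reach for most laptops, while 4-bit quantization brings them to roughly 10 GB, which fits alongside the OS on a well-equipped machine.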

Fast CPU Response: The CPU can be used to run traditional, diverse workloads and achieve low latency. The Core Ultra adopts advanced Intel 4 manufacturing process, allowing laptops to have up to 16 cores and 22 threads, with a turbo frequency of up to 5.1GHz.

High GPU Throughput: The GPU is ideal for large workloads that require parallel throughput. The Core Ultra comes standard with Arc GPU integrated graphics. The Core Ultra 7 165H includes 8 Xe-LPG cores (128 vector engines), and the Core Ultra 5 125H includes 7. Moreover, this generation of integrated graphics supports AV1 hardware encoding, enabling faster output of high-quality, high-compression-rate videos. With its leading encoding and decoding capabilities, the Arc GPU has indeed built a good reputation in the video editing industry. With a substantial increase in vector engine capabilities, many content creation ISVs have demonstrated higher efficiency in smart keying, frame interpolation, and other functions based on AI PCs.

Efficient NPU: The NPU (Neural Processing Unit) newly introduced in the Core Ultra provides up to 10 times the power efficiency of traditional CPUs and GPUs on AI workloads. As a dedicated AI acceleration engine, the NPU can take on sustained, high-complexity AI workloads while greatly reducing energy consumption.

Edge AI has unlimited possibilities, and its greatest value is precisely in practicality. With sufficient computing power, whether through large-scale language models or other models, it can indeed increase the efficiency of content production and indirectly enhance the operational efficiency of every office worker.

For commercial AI PCs, Intel has also launched the vPro® platform based on Intel® Core™ Ultra, which organically combines AI with the productivity, security, manageability, and stability of the commercial platform. Broadcom demonstrated that vPro-based AI PC intelligent management transforms traditional asset management from passive to proactive: previously, it was only possible to see whether devices were “still there” and “usable,” and operations like patch upgrades were planned; with AI-enhanced vPro, it can autonomously analyze device operation, identify potential issues, automatically match corresponding patch packages, and push suggestions to maintenance personnel. Beirui’s Sunflower has an AI intelligent remote control report solution, where remote monitoring of PCs is no longer just screen recording and capturing but can automatically and in real-time identify and generate remote work records of the computer, including marking sensitive operations such as file deletion and entering specific commands. This significantly reduces the workload of maintenance personnel in checking and tracing records.

The Future is Here: Hundreds of ISVs Realizing Actual Business Applications

Henry Ford once commented on the invention of the automobile: “If you ask your customers what they need, they will say they need a faster horse.”

“A faster horse” is a consumer trap: people who think AI phones and AI PCs are just gimmicks may simply assume, out of habit, that they do not yet need to upgrade their horse. More deeply, the public holds some misconceptions about how AI lands in practice, which show up as two extremes. One extreme holds that it is something for avant-garde heavy users and flagship configurations, typically in scenarios like image and video processing; the other sees it as a refreshing chatbot, a sort of enhanced search engine: nice to have, but hardly necessary. In reality, the adoption of AI PCs far exceeds what many people imagine: for commercial customers, Intel has deeply optimized in cooperation with more than 100 ISVs worldwide, and more than 35 local ISVs have integrated optimizations on the client side, creating a huge AI ecosystem with over 300 ISV features and bringing an AI PC experience of unprecedented scale!

Moreover, I do not think AI application deployment at this scale is pie in the sky or “fighting for the future.” In my eyes, the showcase of so many AI PC solutions looked like an “OpenVINO™ party.” OpenVINO™ is a cross-platform deep learning toolkit developed by Intel; the name stands for “Open Visual Inference and Neural Network Optimization.” The toolkit was released back in 2018 and has accumulated a large number of computer vision and deep learning inference applications over the years. By the Iris Xe integrated-graphics era, the hardware-software combination already had a strong reputation: relying on a mature algorithm store, all kinds of AI applications could be built easily on the 11th-generation Core platform, from behavior detection for smart security to automatic inventory checking in stores, with quite good results. Now that AI PC integrated graphics have evolved to Xe-LPG, with compute power doubled, the applications accumulated around OpenVINO™ will perform even better. In other words, the “right place” (the continuity of the Xe engine) and the “right people” (OpenVINO™’s ISV resources) were already in place.

What truly ignites the AI PC is the “right time,” namely the arrival of practical large language models. The breakthrough of large language models has effectively solved the problems of natural-language interaction and data training, greatly lowering the threshold for ordinary users to tap AI computing power. Earlier I cited many examples embedded in office applications; here is one more: the combination of Kodong Intelligent Controller’s multimodal visual language model with a robotic arm. The robotic arm is a commonplace robot application that has long been able to perform operations such as moving and sorting objects with machine vision. Traditionally, however, object recognition and manipulation required pre-training and programming. With a large language model integrated, the whole system can recognize and execute multimodal instructions. For instance, we can say: “Put the phone on that piece of paper.” We no longer need to teach the robot what a phone or a piece of paper is, give specific coordinates, or plan the motion path. Natural-language instructions and camera images are fused, and execution commands for the robotic arm are generated automatically. For such industrial scenarios, the entire pipeline can run on a laptop-class computing platform, and the data never needs to leave the factory.

Therefore, what AI PC brings us is definitely not just “a faster horse,” but it subverts the way PCs are used and expands the boundaries of user capabilities. Summarizing the existing ISVs and solutions, we can categorize AI PC applications into six major scenarios:

  1. AI Chatbot: More professional Q&A for specific industries and fields.
  2. AI PC Assistant: Directly operates the PC, handling personal files, photos, videos, etc.
  3. AI Office Assistant: Office plugins to enhance office software usage efficiency.
  4. AI Local Knowledge Base: RAG (Retrieval Augmented Generation) applications, including various text and video files.
  5. AI Image and Video Processing: Generation and post-processing of multimedia information such as images, videos, and audio.
  6. AI PC Management: More intelligent and efficient device asset and security management.

Summary

It is undeniable that the development of AI always relies on technological innovation in, and the combination of, hardware and software. AI PCs based on the Core Ultra are first of all faster, more powerful PCs with lower power consumption and longer battery life, and these hardware characteristics support AI applications that change our usage experience and habits more profoundly. A PC empowered by “emergent intelligence” is no longer just a productivity tool; in some scenarios it can directly become a collaborator or even an operator. Behind this are the performance improvements brought by advances in microarchitecture and process technology, as well as the empowerment of new productive forces such as large language models.

If we regard the CPU, GPU, and NPU as the three computing engines of the AI PC, then correspondingly, the value of running AI locally (on the client side) can be summarized as three rules: economics, physics, and data confidentiality. Economics means that processing data locally reduces cloud-service costs and improves cost efficiency. Physics contrasts with the “virtual” nature of cloud resources: local AI services offer better timeliness and higher accuracy, avoiding the transmission bottleneck between cloud and client. Data confidentiality means that user data stays entirely local, preventing misuse and leakage.

In 2023, the rapid advance of large language models made it year one of AI in the cloud. In 2024, the client-side deployment of large language models is opening year one of the AI PC. We look forward to AI continuing to consolidate its applications as the cloud and the client develop in tandem, steadily releasing powerful productivity, and we look forward to Intel joining forces with ISVs and OEMs to deliver even stronger “new productive forces.”



The Future of Coding: Will Generative AI Make Programmers Obsolete?

Table of Contents

  1. Is coding still worth learning in 2024?
  2. Is AI replacing software engineers?
  3. Impact of AI on software engineering
  4. The problem with AI-generated code
  5. How AI can help software engineers
  6. Does AI really make you code faster?
  7. Can one AI-powered engineer do the work of many?
  8. Future of Software Engineering
  9. Reference
Credits: this post is a notebook of the key points from YouTube content creator Programming with Mosh’s video, with some editorial work. TL;DR: watch the video.

Is coding still worth learning in 2024?

This is a common question for many people, especially younger students trying to choose a career path that offers some assurance of future income.

People are worried that AI is going to replace software engineers, or any engineer related to coding and designs.

As you know, we should trust solid data rather than media hype and hearsay in the digital era. Social media has been stoking the anxiety that every job is about to collapse because of AI and that coding has no future.

But I’ve got a different take, backed up by real-world numbers.

Note: In this post, “software engineer” represents all groups of coders (data engineer, data analyst, data scientist, machine learning engineer, frontend/backend/full-stack developers, programmers and researchers).

Is AI replacing software engineers?

The short answer is NO.

But there is a lot of fear about AI replacing coders. Headlines scream about robots taking over jobs, and it can be overwhelming. But the truth is:

AI is not going to take your job. People who can work with AI will have the advantage, and they are the ones who probably will.

Software engineering is not going away, at least not anytime soon. Here is some data to back this up.

The US Bureau of Labor Statistics (BLS) is a government agency that tracks job growth across the country and publishes the data on its website. The data shows continued demand for software developers and for computer and information research scientists.

It projects that employment of software developers will grow by 26% from 2022 to 2032, while the average across all occupations is only 3%. This is a strong indication that software engineering is here to stay.

Source: https://www.bls.gov/ooh/computer-and-information-technology/software-developers.htm#tab-6

In our lives, the research and development conducted by computer and information research scientists turns ideas into technology. As demand for new and better technology grows, demand for these scientists will grow as well.

There is a similar trend for Computer and Information Research Scientists, which is expected to grow by 23% from 2022 to 2032.

source: https://www.bls.gov/ooh/computer-and-information-technology/computer-and-information-research-scientists.htm#tab-6

Impact of AI on software engineering

To better understand the impact of AI on software engineering, let’s do a quick revisit of the history of programming.

In the early days of programming, engineers wrote code in a form that only the computer understood. Then we created compilers, and we could program in human-readable languages like C++ and Java without worrying about how the code would eventually be converted into zeros and ones, or where it would be stored in memory.

Here is the fact

Compilers did not replace programmers. They made them more efficient!

Since then we have built so many software applications and totally changed the world.

The problem with AI-generated code

AI will likely do the same. As it changes the future, we will be able to delegate routine and repetitive coding tasks to AI so we can focus on complex problem-solving, design, and innovation.

This will allow us to build software applications more sophisticated than most people can even imagine today. But even then, just because AI can generate code doesn’t mean we can, or should, delegate the entire coding aspect of software development to AI, because

AI-generated code is lower quality; we still need to review and refine it before using it in production.

In fact, there is a study to support this: Coding on Copilot: 2023 Data Suggests Downward Pressure on Code Quality. The authors analyzed 153M changed lines of code from 2020 to 2023 and found disconcerting trends for maintainability, projecting that code churn would double in 2024.

source: abstract of Coding on Copilot: 2023 Data Suggests Downward Pressure on Code Quality

So, yes, we can produce more code with AI. But

More Code != Better Code

Humans should always review and refine AI-generated code for quality and security before deploying it to production. That means the coding skills software engineers have today will remain relevant in the future.

You still need knowledge of data structures and algorithms, programming languages and their tricky parts, and tools and frameworks to review and refine AI-generated code; you will just spend less time typing it into the computer.

So anyone telling you that you can use natural language to build software without understanding anything about coding is out of touch with the reality of software engineering (or they are trying to sell you something, e.g., GPUs).

source: NVIDIA CEO: No Need To Learn Coding, Anybody Can Be A Programmer With Technology

How AI can help software engineers

Of course, you can make a toy app with AI in minutes, but that is not the same kind of software that runs our banks, transportation, healthcare, and security. Those are the systems that really matter, and our lives depend on them. We can’t let a code monkey talk to a chatbot in English and get that software built; at the very least, this will not happen in our lifetime.

In the future, we will probably spend more time designing new features and products with AI instead of writing boilerplate code. We will likely delegate aspects of coding to AI, but this doesn’t mean we don’t need to learn to code.

As a software engineer or any coding practitioner, you will always need to review what AI generates and refine it either by hand or by guiding the AI to improve the code.

Keep in mind that coding is only one small part of a software engineer’s job; we often spend most of our time talking to people, understanding requirements, writing stories, and discussing software and system architecture.

Instead of being worried about AI, I’m more concerned about Human Intelligence!

Does AI really make you code faster?

AI can boost our programming productivity, but not necessarily our overall productivity.

In fact, McKinsey’s report, Unleashing Developer Productivity with Generative AI, found that for highly complex tasks, developers saw less than a 10% improvement in speed with generative AI support.

source: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/unleashing-developer-productivity-with-generative-ai

As you can see, AI helped the most with documentation and, to some extent, code generation; for code refactoring the improvement dropped to around 20%, and for high-complexity tasks it was less than 10%.

 Time savings shrank to less than 10 percent on tasks that developers deemed high in complexity due to, for example, their lack of familiarity with a necessary programming framework.

Thus, if anyone tells you that software engineers will be obsolete in 5 years, they are either ignorant or trying to sell you something.

In fact, some studies suggest that the role of software engineers may become more valuable, as they will be needed to develop, manage, and maintain these AI systems.

They (software engineers) need to understand all the complexity of building software and use AI to boost their productivity.

Can one AI-powered engineer do the work of many?

Now people are worried that one senior engineer can simply use AI to do the work of many engineers, eventually leaving no job opportunities for juniors.

But again, this is a fallacy, because in reality the time savings you get from AI are not as great as promised. Anyone who uses AI to generate code knows this: it takes effort to craft the right prompts to get usable results, and the code still needs polishing.

Thus, it is not like one engineer will suddenly have so much free time to do the job of many people.

But you may ask, this is now, what about the future? Maybe in a year or two, AI will start to build software like a human.

In theory, yes: AI is advancing, and one day it may even reach or surpass human intelligence. But as the saying often attributed to Einstein goes:

In Theory, Theory and Practice are the Same.

In Practice, they are NOT.

The reality is that while machines may be able to handle repetitive and routine tasks, human creativity and expertise will still be necessary for developing complex solutions and strategies.

Software engineering will be extremely important over the next several decades. I don’t think it is going away in the future, but I do believe it will change.

Future of Software Engineering

Software powers our world and that will not change anytime soon.

In the future, we will have to learn how to feed the right prompts into our AI tools to get the expected results. This is not an easy skill to develop; it requires problem-solving ability as well as knowledge of programming languages and tools. And if you’ve already made up your mind not to invest your time in software engineering or coding, that’s perfectly fine. Follow your passion!

Coding tools will evolve, as they always do, but the true skill lies in learning and adapting. The future engineer needs today’s coding skills plus a good understanding of how to use AI effectively. The future brings more complexity and demands more knowledge and adaptability from software engineers.

If you like building things with code, and if the idea of shaping the future with technology gets you excited, don’t let negativity and fear of Gen-AIs hold you back.

Reference

Prompt Engineering for LLM

2024-Feb-04: 1st Version

  1. Introduction
  2. Basic Prompting
    1. Zero-shot
    2. Few-shot
    3. Hallucination
  3. Perfect Prompt Formula for ChatBots
  4. RAG, CoT, ReACT, SASE, DSP …
    1. RAG: Retrieval-Augmented Generation
    2. CoT: Chain-of-Thought
    3. Self-Ask + Search Engine
    4. ReAct: Reasoning and Acting
    5. DSP: Directional Stimulus Prompting
  5. Summary and Conclusion
  6. Reference
Prompt engineering is like adjusting audio without opening the equipment.

Introduction

Prompt Engineering, also known as In-Context Prompting, refers to methods for communicating with a Large Language Model (LLM) like GPT (Generative Pre-trained Transformer) to manipulate/steer its behaviour for expected outcomes without updating, retraining or fine-tuning the model weights. 

Researchers, developers, or users may engage in prompt engineering to instruct a model for specific tasks, improve the model’s performance, or adapt it to better understand and respond to particular inputs. It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.

This post focuses only on prompt engineering for autoregressive language models, so nothing about image generation or multimodal models.

Basic Prompting

Zero-shot and few-shot learning are the two most basic approaches to prompting, pioneered by many LLM papers and commonly used to benchmark LLM performance: they evaluate how well a model handles a task with little or no task-specific training data. Here are examples of both:

Zero-shot

Zero-shot learning simply feeds the task text to the model and asks for results.

Scenario: Text Completion (Please try the following input in ChatGPT or Google Bard)

Input:

Task: Complete the following sentence:

Input: The capital of France is ____________.

Output (ChatGPT / Bard):

Output: The capital of France is Paris.

Few-shot

Few-shot learning presents a set of high-quality demonstrations, each consisting of both input and desired output, on the target task. As the model first sees good examples, it can better understand human intention and criteria for what kinds of answers are wanted. Therefore, few-shot learning often leads to better performance than zero-shot. However, it comes at the cost of more token consumption and may hit the context length limit when the input and output text are long.

Scenario: Text Classification

Input:

Task: Classify movie reviews as positive or negative.

Examples:
Review 1: This movie was amazing! The acting was superb.
Sentiment: Positive
Review 2: I couldn't stand this film. The plot was confusing.
Sentiment: Negative

Question:
Review: I'll bet the video game is a lot more fun than the film.
Sentiment:____

Output

Sentiment: Negative

Many studies have explored the construction of in-context examples to maximize performance. They observed that the choice of prompt format, training examples, and the order of the examples can significantly impact performance, ranging from near-random guesses to near-state-of-the-art performance.
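The mechanics of the two prompting styles above are easy to capture in code. The sketch below (a hypothetical helper, not from any library) assembles either a zero-shot or a few-shot classification prompt from optional demonstrations:

```python
def build_prompt(task, query, examples=None):
    """Assemble a zero-shot or few-shot prompt for a chat LLM."""
    lines = [f"Task: {task}", ""]
    if examples:  # few-shot: prepend labelled input/output demonstrations
        lines.append("Examples:")
        for i, (text, label) in enumerate(examples, 1):
            lines.append(f"Review {i}: {text}")
            lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # leave the answer slot open for the model
    return "\n".join(lines)

# Zero-shot: the task text alone
zero_shot = build_prompt("Classify movie reviews as positive or negative.",
                         "I'll bet the video game is a lot more fun than the film.")

# Few-shot: two labelled demonstrations ahead of the query
few_shot = build_prompt(
    "Classify movie reviews as positive or negative.",
    "I'll bet the video game is a lot more fun than the film.",
    examples=[("This movie was amazing! The acting was superb.", "Positive"),
              ("I couldn't stand this film. The plot was confusing.", "Negative")])
print(few_shot)
```

In practice the returned string would be sent to a chat model; here it only illustrates how few-shot demonstrations are spliced in ahead of the query.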

Hallucination

In the context of Large Language Models (LLMs), hallucination refers to a situation where the model generates outputs that are incorrect or not grounded in reality. A hallucination occurs when the model produces information that seems plausible or coherent but is actually not accurate or supported by the input data.

For example, in a language generation task, if a model is asked to provide information about a topic and it generates details that are not factually correct or have no basis in the training data, it can be considered as hallucination. This phenomenon is a concern in natural language processing because it can lead to the generation of misleading or false information.

Addressing hallucination in LLMs is a challenging task, and researchers are actively working on developing methods to improve the models’ accuracy and reliability. Techniques such as fine-tuning, prompt engineering, and designing more specific evaluation metrics are among the approaches used to mitigate hallucination in language models.

Perfect Prompt Formula for ChatBots

For everyday personal writing tasks such as text generation, six key components make up the perfect formula for ChatGPT and Google Bard:

Task, Context, Exemplars, Persona, Format, and Tone.

Prompt Formula for ChatBots
  1. The Task sentence needs to articulate the end goal and start with an action verb.
  2. Use three guiding questions to help structure relevant and sufficient Context.
  3. Exemplars can drastically improve the quality of the output by giving specific examples for the AI to reference.
  4. For Persona, think of who you would ideally want the AI to be in the given task situation.
  5. Visualizing your desired end result will let you know what format to use in your prompt.
  6. And you can actually use ChatGPT to generate a list of Tone keywords for you to use!
Example from Jeff Su: Master the Perfect ChatGPT Prompt Formula 
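As a sketch, the six components can be assembled mechanically. The helper and labels below are one reasonable convention of my own, not an official format:

```python
def perfect_prompt(task, context="", exemplars="", persona="", fmt="", tone=""):
    """Combine the six formula components into a single chatbot prompt.

    Empty components are simply skipped, since not every prompt needs all six.
    """
    parts = [("You are", persona),  # persona first, so the model adopts the role
             ("Task", task),        # the action-verb goal
             ("Context", context),
             ("Examples", exemplars),
             ("Format", fmt),
             ("Tone", tone)]
    return "\n".join(f"{label}: {value}" for label, value in parts if value)

prompt = perfect_prompt(
    task="Write a 3-bullet summary of last week's project status.",
    context="Audience: senior management with little technical background.",
    persona="an experienced program manager",
    fmt="Markdown bullet list",
    tone="concise and confident")
print(prompt)
```

Omitted components (here, Exemplars) leave no trace in the output, which keeps the prompt short when a component adds nothing.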

RAG, CoT, ReACT, SASE, DSP …

If you have ever wondered what on earth those techies are talking about when they use the words above, please continue…

OK, so here’s the deal. We’re diving into the world of academia, talking about machine learning and large language models in the computer science and engineering domains. I’ll try to explain it in a simple way, but you can always dig deeper into these topics elsewhere.

RAG: Retrieval-Augmented Generation

RAG (Retrieval-Augmented Generation): RAG typically refers to a model that combines retrieval and generation. It uses a retrieval mechanism to fetch relevant information from a database or knowledge base and then generates a response grounded in that retrieved information. In real applications, the user’s input and the model’s output are pre- and post-processed to follow certain rules and comply with laws and regulations.

RAG: Retrieval-Augmented Generation

Here is a simplified example of using a Retrieval-Augmented Generation (RAG) model for a question-answering task. In this example, we’ll use a system that retrieves relevant passages from a knowledge base and generates an answer based on that retrieved information.

Input:

User Query: What are the symptoms of COVID-19?

Knowledge Base:

1. Title: Symptoms of COVID-19
Content: COVID-19 symptoms include fever, cough, shortness of breath, fatigue, body aches, loss of taste or smell, sore throat, etc.

2. Title: Prevention measures for COVID-19
Content: To prevent the spread of COVID-19, it's important to wash hands regularly, wear masks, practice social distancing, and get vaccinated.

3. Title: COVID-19 Treatment
Content: COVID-19 treatment involves rest, hydration, and in severe cases, hospitalization may be required.

RAG Model Output:

Generated Answer: 

The symptoms of COVID-19 include fever, cough, shortness of breath, fatigue, body aches, etc.

Remark: ChatGPT 3.5 gives basic results like the above, while Google Bard provides extra resources such as CDC links and other sources drawn from its search engine. We can guess that Google uses a different framework from OpenAI’s.
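The retrieve-then-generate flow can be sketched as a toy pipeline: score knowledge-base entries by word overlap with the query and build the answer from the best match. The scoring function and the string-concatenating "generator" are deliberately naive stand-ins for an embedding index and an LLM:

```python
def retrieve(query, knowledge_base, k=1):
    """Rank KB entries by word overlap with the query (stand-in for vector search)."""
    q = set(query.lower().split())
    scored = sorted(knowledge_base,
                    key=lambda doc: len(q & set(doc["content"].lower().split())),
                    reverse=True)
    return scored[:k]

def generate(query, passages):
    """Stand-in for the LLM: compose an answer grounded in retrieved passages."""
    context = " ".join(p["content"] for p in passages)
    return f"Based on the retrieved sources: {context}"

kb = [
    {"title": "Symptoms of COVID-19",
     "content": "COVID-19 symptoms include fever, cough, shortness of breath, fatigue."},
    {"title": "Prevention measures for COVID-19",
     "content": "Wash hands regularly, wear masks, practice social distancing."},
    {"title": "COVID-19 Treatment",
     "content": "Treatment involves rest, hydration, and in severe cases hospitalization."},
]

answer = generate("What are the symptoms of COVID-19?",
                  retrieve("What are the symptoms of COVID-19?", kb))
print(answer)
```

A production system would replace the overlap score with dense embeddings and the `generate` stub with an LLM call that cites the retrieved passages, but the two-stage shape is the same.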

CoT: Chain-of-Thought

Chain-of-thought (CoT) prompting (Wei et al. 2022) generates a sequence of short sentences to describe reasoning logics step by step, known as reasoning chains or rationales, to eventually lead to the final answer.

The benefit of CoT is more pronounced for complicated reasoning tasks while using large models (e.g. with more than 50B parameters). Simple tasks only benefit slightly from CoT prompting.

Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, essentially creating a tree structure. The search process can be BFS or DFS while each state is evaluated by a classifier (via a prompt) or majority vote.

CoT : Chain-of-Thought and ToT: Tree-of-Thought
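In its simplest zero-shot form, CoT is just a suffix appended to the prompt; the few-shot form prepends worked rationales. A minimal sketch (the arithmetic demonstration is illustrative):

```python
def zero_shot_cot(question):
    """Zero-shot CoT: append the canonical reasoning trigger."""
    return f"Q: {question}\nA: Let's think step by step."

def few_shot_cot(question, demos):
    """Few-shot CoT: prepend (question, rationale, answer) demonstrations."""
    blocks = [f"Q: {q}\nA: {rationale} The answer is {a}."
              for q, rationale, a in demos]
    blocks.append(f"Q: {question}\nA:")  # model continues with its own rationale
    return "\n\n".join(blocks)

demos = [("Roger has 5 balls and buys 2 cans of 3 balls each. How many balls?",
          "He starts with 5. Two cans of 3 is 6. 5 + 6 = 11.", "11")]
print(few_shot_cot("A baker had 23 apples, used 20, bought 6 more. How many now?",
                   demos))
```

The demonstrations teach the model to emit its reasoning chain before the final answer, which is where the accuracy gain on multi-step problems comes from.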

Self-Ask + Search Engine

Self-Ask (Press et al. 2022) is a method to repeatedly prompt the model to ask follow-up questions to construct the thought process iteratively. Follow-up questions can be answered by search engine results.

Self-Ask+Search Engine Example
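The Self-Ask control flow can be sketched as a loop: ask the model for its next line; if it is a follow-up question, answer it with a search call and append the result to the context; stop when a final answer appears. Both the "model" and the "search engine" below are hard-coded stubs, just to show the plumbing:

```python
def self_ask(question, model, search, max_steps=5):
    """Iteratively prompt for follow-up questions; answer each via search."""
    context = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = model(context)               # model proposes its next line
        context += reply + "\n"
        if reply.startswith("Follow up:"):
            sub_q = reply[len("Follow up:"):].strip()
            context += f"Intermediate answer: {search(sub_q)}\n"
        elif reply.startswith("So the final answer is:"):
            return reply[len("So the final answer is:"):].strip()
    return None  # gave up without a final answer

# Stubs: a scripted two-turn "model" and a one-entry "search engine".
turns = iter(["Follow up: When did Einstein die?",
              "So the final answer is: 1955"])
model = lambda ctx: next(turns)
search = lambda q: {"When did Einstein die?": "Einstein died in 1955."}.get(q, "unknown")

print(self_ask("How long ago did Einstein die?", model, search))
```

With a real LLM and search API, `model` would be a completion call over the growing context and `search` a web query; the loop structure is unchanged.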

ReAct: Reasoning and Acting

ReAct (Reason + Act; Yao et al. 2023) combines iterative CoT prompting with queries to Wikipedia APIs to search for relevant entities and content and then add it back into the context.

Each trajectory consists of multiple thought-action-observation steps (i.e., dense thought), where free-form thoughts are used for various purposes.

Example of ReAct from p. 18 (Reason + Act; Yao et al. 2023)
ReAct: Reasoning and Acting

Specifically, from the paper, the authors use a combination of thoughts that decompose questions (“I need to search x, find y, then find z”), extract information from Wikipedia observations (“x was started in 1844”, “The paragraph does not tell x”), perform commonsense (“x is not y, so z must instead be…”) or arithmetic reasoning (“1844 < 1989”), guide search reformulation (“maybe I can search/lookup x instead”), and synthesize the final answer (“…so the answer is x”).
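The thought-action-observation loop generalizes beyond Wikipedia. In the sketch below, `llm` is a scripted stub and `tools` maps action names to functions; the `Search[x]` / `Finish[x]` action syntax follows the paper's convention:

```python
import re

def react(question, llm, tools, max_steps=6):
    """Run a Thought -> Action -> Observation loop until Finish[answer]."""
    context = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(context)                  # emits "Thought: ...\nAction: ..."
        context += step + "\n"
        m = re.search(r"Action: (\w+)\[(.*)\]", step)
        if not m:
            continue                         # a pure thought, no action this step
        action, arg = m.groups()
        if action == "Finish":
            return arg
        observation = tools[action](arg)     # e.g. Search, Lookup
        context += f"Observation: {observation}\n"
    return None

# Scripted stubs: one search step, then the final answer.
script = iter([
    "Thought: I need to search Colorado orogeny.\nAction: Search[Colorado orogeny]",
    "Thought: The eastern sector rises to 1,800 ft.\nAction: Finish[1,800 ft]",
])
answer = react("What is the elevation range of the Colorado orogeny's eastern sector?",
               lambda ctx: next(script),
               {"Search": lambda q: "The eastern sector extends to elevations of 1,800 ft."})
print(answer)
```

The observation string fed back into the context is what lets the next thought reason over retrieved evidence rather than the model's parametric memory alone.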

DSP: Directional Stimulus Prompting

Directional Stimulus Prompting (DSP, Z. Li 2023), is a novel framework for guiding black-box large language models (LLMs) toward specific desired outputs.  Instead of directly adjusting LLMs, this method employs a small tunable policy model to generate an auxiliary directional stimulus (hints) prompt for each input instance. 

DSP: Directional Stimulus Prompting

Summary and Conclusion

Prompt engineering involves carefully crafting these prompts to achieve desired results. It can include experimenting with different phrasings, structures, and strategies to elicit the desired information or responses from the model. This process is crucial because the performance of language models can be sensitive to how prompts are formulated.

I believe a lot of researchers will agree with me. Some prompt engineering papers don’t need to be 8 pages long. They could explain the important points in just a few lines and use the rest for benchmarking. 

As researchers and developers delve further into the realms of prompt engineering, they continue to push the boundaries of what these sophisticated models can achieve.

To achieve this, it’s important to create a user-friendly LLM benchmarking system that many people will use. Developing better methods for creating prompts will help advance language models and improve how we use LLMs. These efforts will have a big impact on natural language processing and related fields.

Reference

  1. Weng, Lilian. (Mar 2023). Prompt Engineering. Lil’Log.
  2. IBM (Jan 2024) 4 Methods of Prompt Engineering
  3. Jeff Su (Aug 2023) Master the Perfect ChatGPT Prompt Formula