GuruFocus

META: Is This The Next Gushing Stream Of Revenue?


Investment Thesis

Meta Platforms Inc.'s (NASDAQ: META) open-weight Llama strategy is meant to add revenue, not to run as a charity. By providing a stable foundation model, Meta captures builder dollars while limiting brand and safety risk. The ecosystem feeds back into Meta's own offerings: Meta AI's roughly one billion users generate signals, advertising and ranking models improve, and messaging bots monetize conversations. Higher ad yields and supplemental non-feed revenue, combined with falling cost per inference, permit a transition from momentum to operating leverage.

Q2 Performance


Meta's Q2'25 showed an extraordinary, ad-led resurgence: total revenue of $47.5B was up ~22% y/y (vs. $39.1B in Q2'24) and ~12% q/q, with advertising contributing $46.6B (call it ~98% of revenue). Family of Apps "Other" revenue is still small at $583M but is growing faster y/y than ads, suggesting WhatsApp paid messaging and Meta Verified are starting to register as a real product set, even at very low scale today. Reality Labs revenue remains lumpy ($370M) with clear holiday bumps in Q4 and little q/q momentum, while operating losses ran near $4.5B again this quarter, underscoring a long and uncertain payback. Even so, Family of Apps (FOA) operating income rose to $25.0B and group operating margin jumped to 43% (vs. 38% a year ago), a clear sign that AI-driven ad productivity and cost discipline more than offset the RL outlay. Seasonality is real (Q4 is the highest-revenue quarter), but the y/y margin improvement points to structural efficiency gains, not just seasonal lift. Key watch items: dependence on ads, the durability of AI-driven yield improvements per ad, and whether RL can narrow losses without giving up the current hardware/AI roadmap. All in all, a high-quality, margin-enhancing quarter.
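As a quick sanity check, the headline growth and mix figures can be reproduced directly from the reported numbers; a minimal sketch in Python (values in $B, as quoted in this article):

```python
# Check the Q2'25 revenue arithmetic cited above (figures in $B).
q2_25_total = 47.5   # total revenue, Q2'25
q2_24_total = 39.1   # total revenue, Q2'24
q2_25_ads   = 46.6   # advertising revenue, Q2'25

yoy_growth = q2_25_total / q2_24_total - 1   # ~21.5%, i.e. the "~22% y/y"
ad_mix     = q2_25_ads / q2_25_total         # ~98.1%, i.e. "~98% of revenue"

print(f"y/y growth: {yoy_growth:.1%}, ad share: {ad_mix:.1%}")
```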

Q3 Outlook

Q3'25 is likely to extend Q2's ad-led momentum, with normal pre-holiday seasonality. Anticipate low-to-mid single-digit sequential revenue growth and high-teens y/y growth as AI-enhanced ranking and creative improvements enable price increases while impressions rise modestly. "Other" revenue (WhatsApp paid messaging plus Meta Verified) likely outgrows ads again off a small base, benefitting from the continued rollout of business-messaging flows. Threads adds incremental, brand-safe inventory, but monetization is still nascent. Reality Labs revenue should remain muted ahead of the Q4 hardware season, and operating losses at roughly a $4.5B run-rate remain the main drag on the P&L. Mix and cost discipline should keep group operating margin in the low 40s despite rising depreciation from AI buildouts. Key swing factors: macro ad demand, Reels time spent vs. TikTok/YouTube, and European policy frictions; none appears likely to derail the base cadence. The larger step-up comes in Q4 with holiday budgets, but the December 16 chat-signal policy (ex-UK/EU/SK) sets up 2026 for an additional relevance-based pricing boost, assuming solid execution.
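To make that concrete, here is a minimal sketch translating the sequential band into an implied Q3 revenue range; the 2%-5% q/q band is my illustrative reading of "low-to-mid single-digit," not company guidance:

```python
# Implied Q3'25 revenue under an assumed sequential growth band (illustrative).
q2_25_total = 47.5  # $B, reported Q2'25 revenue

for qoq in (0.02, 0.05):  # assumed "low-to-mid single-digit" q/q band
    print(f"q/q {qoq:.0%} -> implied Q3 ~${q2_25_total * (1 + qoq):.1f}B")
# prints roughly $48.4B to $49.9B
```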

Guru Investments

As of Q2 2025, Meta is a high-conviction core position for Tiger Global, which holds about 7.53 million shares valued near $5.34 billion, or about 15.3% of its equity portfolio and its single largest holding; Tiger added ~0.9% to the stake in the quarter. Fisher Asset Management (Ken Fisher) owns about 6.36 million shares valued near $4.5 billion, a ~1.6% portfolio weight within a broadly diversified mega-cap tech book, after trimming the position by ~1.5% in Q2 2025. Baillie Gifford owns about 5.45 million shares valued near $3.86 billion, or ~2.8% of assets, having reduced the position by ~7.1% in Q2 2025. Together these snapshots reflect three distinct stances: Tiger's concentrated conviction, Fisher's sizable but more measured exposure, and Baillie Gifford's growth-oriented but actively managed allocation. Dollar values fluctuate with Meta's share price and reflect the managers' latest reported holdings.
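The snapshots are internally consistent; a minimal sketch backing out the implied share price from each holder's reported shares and value (my arithmetic on the figures above, not additional disclosure):

```python
# Implied price per share from each reported position (shares in M, value in $B).
holders = {
    "Tiger Global":            (7.53, 5.34),
    "Fisher Asset Management": (6.36, 4.50),
    "Baillie Gifford":         (5.45, 3.86),
}

for name, (shares_m, value_b) in holders.items():
    price = value_b * 1e9 / (shares_m * 1e6)
    print(f"{name}: implied ~${price:.0f}/share")
# all three cluster around $708, consistent with a single Q2 2025 pricing date
```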

The Brewing Revenue Potential

While Meta calls its approach open source, it is more precisely open weights. By releasing provably strong base models under a community license combined with an acceptable-use policy, Meta lets developers fine-tune and deploy Llama without paying for pretraining, while Meta retains guardrails and brand safety. This pragmatic, carefully staged approach keeps building with Llama low-friction while preserving Meta's strategic control. That is why institutional enterprises and large platforms can safely adopt Llama while the purists tangle over definitions.

The Nuanced Structure of Llama

Credibility is gasoline. Meta has kept delivering credible model families: Llama 3.1, with its emblematic 405B-parameter model, set a state-of-the-art bar for open weights. Llama 3.2 followed with vision models (11B/90B) and small text models (1B/3B) that run at the edge. In 2025, the Llama 4 herd arrived, natively multimodal (text and vision) with extensive context, demonstrating that the cadence is not a one-off sprint. The clear signal to developers: you can bet your stack on Llama without getting stuck on last year's model.

Distribution turns that gasoline into momentum. Because the weights are portable, hyperscalers and data platforms are racing to make Llama a first-class citizen. Amazon Bedrock, Google Vertex AI, and Databricks all host the family, creating a developer experience where Llama is available wherever developers already work (AWS consoles, GCP notebooks, lakehouse pipelines) with no stack changes needed. Every point of integration is free distribution and product-market validation for Meta.

Standardization compounds that momentum into an ecosystem. Llama Stack provides a reference distribution containing a standard toolkit, APIs, tuning, safety layers, and agentic app scaffolding across environments, which OEMs (Dell, AI Alliance members) and the community adopt as a common baseline. This reduces integration toil, scales the skills base, and greatly simplifies third-party innovation for Meta to reintegrate.

The Commercialization of the Llama Apparatus

Portability maximizes the surface area for invention. Llama's small models and vision variants address edge and low-latency use cases, arriving just as Meta's devices become more widely available. An extreme example is Space Llama, a tuned Llama 3.2 that ran offline on the ISS, a vivid case for why open weights and small models matter. The wider Llama's distribution, the more novel workflows the community invents for Meta to reuse internally.

Meta AI as a product has roughly 1B monthly actives across the Family of Apps, providing an unmatched feedback flywheel for improving models and the overall experience. Starting 16 December 2025 (excluding the UK/EU/SK), Meta AI interactions will feed preference signals into content and ad personalization, essentially converting assistant usage into relevance signals. That is the fastest path from open-weight strategy to monetized output, because it directly connects model improvements and assistant performance to ad revenue.

Now, the business math. In the short term, monetization does not come from charging developers for Llama. It comes from ecosystem momentum making Meta's core products better and cheaper to run. Models discovered and stress-tested in the wild feed back into ranking, creative tools, and integrity, improving conversion without increasing ad load. The macro impact is already visible: in Q2 FY25, average price per ad increased 9% year over year and impressions increased 11%, resulting in a 43% operating margin on $47.5 billion in sales. Pricing power is what happens when relevance increases; advertisers will pay more for performance.
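Note that the price and impression figures compound rather than add; a minimal sketch of the decomposition, using only the two growth rates quoted above:

```python
# Ad revenue growth = (1 + price growth) * (1 + impressions growth) - 1
price_growth       = 0.09  # average price per ad, y/y (Q2 FY25)
impressions_growth = 0.11  # ad impressions, y/y (Q2 FY25)

implied = (1 + price_growth) * (1 + impressions_growth) - 1
print(f"implied ad revenue growth: {implied:.1%}")
# ~21.0%, in line with the ~22% reported growth; mix and rounding explain the gap
```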

Open weights also enhance product velocity in areas that monetize outside the feed. Messaging commerce is the obvious standout: AI-based chat workflows, catalog creation, and business agents make paid messaging more productive on WhatsApp. Much of that lands in Family of Apps "Other" revenue, which grew 50% year over year in Q2 FY25, explicitly because of WhatsApp paid messaging and Meta Verified. The open-weight ecosystem helps because it shortens build time (agents, safety layers, and language coverage come ready-made), so more merchants onboard, more conversations convert, and more high-margin revenue arrives without increasing ad load.

The Cost Structure

On the cost side, open weights and Llama Stack compress the engineering timeline from research to production; engineering can ship updates more often with less custom infrastructure. Combine that with the company's AI infrastructure program (custom MTIA silicon and long-duration capacity) and you get declining cost per inference over time. Model updates arrive faster at a lower cost per token, and ad serving, recommendations, safety, and assistant features all benefit from the improved unit economics. Even while capex is high, deflationary cost per inference expands operating margins once features reach scale. Ubiquitous distribution also means more workloads can move on-device over time, further reducing cloud costs for select tasks.
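The operating-leverage mechanism is easiest to see with stylized numbers; in the sketch below, both growth rates are hypothetical illustrations, not Meta disclosures:

```python
# Unit cost falls whenever inference volume outgrows total serving cost.
serving_cost_growth = 0.30  # assumed y/y growth in total serving cost
inference_growth    = 0.60  # assumed y/y growth in inference volume

unit_cost_change = (1 + serving_cost_growth) / (1 + inference_growth) - 1
print(f"cost per inference: {unit_cost_change:+.1%} y/y")  # about -18.8%
```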

A less recognized monetization vector: open weights increase the probability that third parties and enterprises select Llama as their default. That is valuable even if Meta never charges usage fees, because standardizing on Llama widens the compatibility surface with Meta's ad products, creative tools, and assistant integrations. With the Llama API released this spring and now broadly available across clouds, developers are ramping faster; each new Llama-native workflow either routes demand back into Meta's properties or generates information Meta can convert into signal value for them.

Financial Analysis

Profitability

Meta's profitability metrics significantly outperform the sector median across the board. Gross margin of 81.97% vs. 53.78% reflects its asset-light ad/feeds model, with little capex flowing through cost of revenue. Operating efficiency is where the differentiation lies: EBIT margin of 43.37% vs. 11.35% (≈3.8x) and EBITDA margin of 52.73% vs. 20.17% (≈2.6x). Net income margin of 39.99% vs. 4.29% (≈9.3x) is also an outlier, driven by scale, low traffic-acquisition costs, and disciplined spending. Levered FCF margin of 17.89% also beats the sector's 10.40%, albeit by the narrowest gap and below Meta's five-year average, reflecting the new heights of AI/datacenter capex. Compared with history, EBIT, EBITDA, and net margins sit above their five-year averages, which suggests durable operating leverage. Caveats: sector medians mask mix differences, and TTM strength reflects a cyclical ad bounce and recent cost cuts that may normalize.
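The "x sector median" multiples in parentheses follow directly from the quoted margins; a minimal sketch reproducing them:

```python
# META TTM margin vs. sector median (both in %), and the implied multiple.
margins = {
    "Gross":       (81.97, 53.78),
    "EBIT":        (43.37, 11.35),
    "EBITDA":      (52.73, 20.17),
    "Net income":  (39.99,  4.29),
    "Levered FCF": (17.89, 10.40),
}

for name, (meta, sector) in margins.items():
    print(f"{name}: {meta / sector:.1f}x sector median")
# EBIT ~3.8x, EBITDA ~2.6x, net ~9.3x; levered FCF ~1.7x is the narrowest gap
```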

Growth

Meta's growth metrics significantly beat sector medians. Revenue growth is 19.37% y/y vs. 3.42% (≈5.7x), and 19.17% forward vs. 3.51% (≈5.5x). Profit-growth spreads are even larger: EBITDA 28.64% y/y vs. 6.29% and 24.41% forward vs. 5.66%; EBIT 28.72% y/y vs. 9.67% and 23.08% forward vs. 10.84%. Compared with its own five-year averages, Meta is at or above trend (e.g., revenue FWD 19.17% vs. 16.72%; EBITDA FWD 24.41% vs. 17.39%). The B- Growth Grade probably incorporates normalization from the post-cut bounce and the law of large numbers; forward rates sit below trailing. Sector medians also mask mix effects (many peers are slower-growth adtech/legacy media), so the percentage beats may exaggerate the structural edge. Regardless, on balance, Meta's high-teens/low-20s growth looks sustainable if the ad cycle holds.
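The "at or above trend" claim can be checked against the two five-year averages the article cites; a minimal sketch:

```python
# Forward growth estimate vs. Meta's own five-year average (both in %).
trend = {
    "Revenue FWD": (19.17, 16.72),
    "EBITDA FWD":  (24.41, 17.39),
}

for name, (fwd, avg5) in trend.items():
    verdict = "above" if fwd > avg5 else "below"
    print(f"{name}: {fwd}% vs {avg5}% 5-yr avg -> {verdict} trend")
```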

The Competitive Scene

Meta is confronted on four broad competitive fronts. First, attention platforms: TikTok and YouTube Shorts continue to capture Gen-Z culture and creator mindshare, putting watch time and ad budgets at risk. Second, budget shift: retail media networks and connected TV (especially Amazon's retail data plus Prime Video ads) keep spend closer to checkout. Third, interface pivot: foundation-model assistants from Google and OpenAI aim to pull discovery into conversational search. Fourth, non-market headwinds: EU/DMA restrictions on personalization, brand-safety issues, and hard limits on compute and power.

Meta's counterpunch is a stack play. It continues to narrow the Reels gap with larger ranking models and generative creative, driving up ad yield without adding load. It shortens the funnel on its own rails through WhatsApp paid messaging, click-to-chat formats, Shops, and merchant agents, monetization that grows independently of feed inventory. Meta AI spans the Family of Apps and, where permitted, returns direct intent signals to recommendations and ads; higher conversion per impression lifts auction prices without degrading the user experience. Threads provides brand-safe, text-based inventory that extends Instagram's targeting graph. On hardware, Ray-Ban "Display" glasses and the low-priced Quest 3S expand assistant usage into hands-free capture and spatial computing, yielding proprietary signals that are difficult for competitors to replicate. Behind the scenes, custom MTIA silicon, long-dated GPU supply deals, and a rapidly expanding data-center footprint lower dollars per inference, turning heavy AI investment into operating leverage.

Risk

Meta's AI ambitions run into hard limits: chips, power, and places to site dense clusters. Power demand for hyperscale AI is outpacing new generation and transmission capacity, lengthening interconnection queues and raising prices. Thermal densities from next-generation GPU pods are straining legacy, air-cooled buildings. Water use, permits, and local politics complicate site approvals, particularly in hot or dry locations. The near-term implications are delays, higher $/inference on expensive or unreliable power, and capex overruns when cooling or grid upgrades don't arrive in time.

Meta has three response vectors. First, it is deploying AI-oriented infrastructure: liquid cooling, higher-density racks, and water-capture systems that improve performance per watt and extend the range of viable climates. Second, it is securing supply assurance: multi-year, large-commitment external compute contracts to fill gaps, alongside custom MTIA silicon to reduce energy per inference on ranking and assistant workloads, lowering $/token and $/conversion when power gets tight. Third, it co-invests at the grid edge (dedicated substations, long-term clean-energy contracts, and water-restoration credits) to de-risk interconnections and minimize political friction.

Residual risks remain: regional generation and transmission still need to be built, residents will remain sensitive about water, and a slip on either the chip or cooling roadmap would trigger delays. But turning compute scarcity into an engineering and procurement-at-scale problem (AI-specific data centers, contracted capacity, custom chips, and site/grid upgrades) raises the odds that power and siting become manageable constraints rather than an absolute limit on growth.

Recommendation

Meta's Llama strategy uses openness as a means of distribution and velocity while retaining control where it matters. A credible cadence (Llama 3.1/3.2 → Llama 4), ubiquitous hosting (Bedrock/Vertex/Databricks), and the compatibility of Llama Stack drive developer adoption and channel free R&D back into Meta's ranking, creative, integrity, and assistant systems. Meta AI's ~1B MAU and, from December 16, 2025 (ex-UK/EU/SK), chat-derived signals increase ad relevance, sustaining higher auction prices with zero increase in ad load. Outside the feed, messaging commerce scales, while MTIA and long-dated capacity keep pushing $/inference lower. In short: rising ad yield, growing non-ad revenue, and improving unit economics create multi-year earnings compounding ahead.