Broadcom Inc. (NASDAQ:AVGO) Q3 2023 Earnings Conference Call August 31, 2023 5:00 PM ET
Ji Yoo – Head of Investor Relations
Hock Tan – President and Chief Executive Officer
Kirsten Spears – Chief Financial Officer
Conference Call Participants
Vivek Arya – Bank of America Securities
Harlan Sur – JPMorgan
Ross Seymore – Deutsche Bank
Stacy Rasgon – Bernstein
Toshiya Hari – Goldman Sachs
Karl Ackerman – BNP Paribas
Harsh Kumar – Piper Sandler
Aaron Rakers – Wells Fargo
Matthew Ramsay – TD Cowen
Christopher Rolland – Susquehanna
Edward Snyder – Charter Equity Research
Welcome to Broadcom Inc.’s Third Quarter Fiscal Year 2023 Financial Results Conference Call. At this time, for opening remarks and introductions, I would like to turn the call over to Ji Yoo, Head of Investor Relations of Broadcom Inc.
Thank you, operator, and good afternoon, everyone. Joining me on today’s call are Hock Tan, President and CEO; Kirsten Spears, Chief Financial Officer; and Charlie Kawwas, President, Semiconductor Solutions Group.
Broadcom distributed a press release and financial tables after the market closed, describing our financial performance for the third quarter fiscal year 2023. If you did not receive a copy, you may obtain the information from the Investors section of Broadcom’s website at broadcom.com.
This conference call is being webcast live and an audio replay of the call can be accessed for one year through the Investors section of Broadcom’s website. During the prepared comments, Hock and Kirsten will be providing details of our third quarter fiscal year 2023 results, guidance for our fourth quarter, as well as commentary regarding the business environment. We’ll take questions after the end of our prepared comments.
Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call.
In addition to US GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today’s press release. Comments made during today’s call will primarily refer to our non-GAAP financial results.
I’ll now turn the call over to Hock.
Thank you, Ji, and thank you, everyone, for joining us today. In fiscal Q3 2023, we achieved consolidated net revenue of $8.9 billion, up 5% year-on-year. Semiconductor solutions revenue increased 5% year-on-year to $6.9 billion, and infrastructure software grew 5% year-on-year to $1.9 billion. Hyperscale continued to grow double digits year-on-year, while enterprise and telco spending moderated. Meanwhile, virtually defying gravity, our wireless business has remained stable.
Now, generative AI investments are driving the continued strength in hyperscale spending for us. As you know, we supply a major hyperscale customer with custom AI compute engines. We are also supplying several hyperscalers with a portfolio of networking technologies as they scale up and scale out their AI clusters within their data centers.
Now representing over $1 billion, generative AI accounted for virtually all the growth in our semiconductor business in Q3 year-on-year. So without the benefit of generative AI revenue in Q3, our semiconductor business was approximately flat year-on-year. In fact, since the start of the fiscal year, our quarterly semiconductor revenue, excluding AI, has stabilized at around $6 billion. And as we had indicated to you a year ago, we expected a soft landing during fiscal ’23, and it appears this is exactly what is happening today.
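A quick back-of-the-envelope sketch of the arithmetic above, in Python. The figures are the approximate values quoted on the call; the $1 billion AI figure is the rounded "over $1 billion" remark, not a reported line item:

```python
# Approximate Q3 FY23 figures quoted on the call, in billions of USD.
q3_semiconductor_revenue = 6.9  # total semiconductor solutions revenue
generative_ai_revenue = 1.0     # "over $1 billion" of generative AI revenue (rounded)

# Stripping out generative AI leaves the ~$6 billion "soft landing" run rate.
ex_ai_revenue = q3_semiconductor_revenue - generative_ai_revenue
print(f"Semiconductor revenue ex-AI: ~${ex_ai_revenue:.1f}B per quarter")
```

This is only illustrative; the exact split is not disclosed beyond the approximations quoted on the call.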
Now let me give you more color on our end markets. As we go through this soft landing, we see our broad portfolio of products influencing the puts and takes across revenues within all our end markets except one, and that is networking.
And so, in my remarks today, I will focus on networking, where generative AI is having a significant impact. Q3 networking revenue was $2.8 billion, up 20% year-on-year in line with guidance, and represented 40% of our semiconductor revenue.
As we indicated above, our switches and routers, as well as our custom silicon AI engines, drove growth in this end market as they were deployed to scale out AI clusters among the hyperscalers.
We’ve always believed, and more than ever now with AI networks, that Ethernet is the best networking protocol to scale out AI clusters. Ethernet today already offers the low-latency attributes needed for machine learning and AI, and Broadcom has the best technology today and tomorrow.
As a founding member of the Ultra Ethernet Consortium with other industry partners, we are driving Ethernet for scaling deployments in large language model networks. Importantly, we’re doing this based on open standards and a broad ecosystem.
Over the past quarter, we have already received substantial orders for our next-generation Tomahawk 5 switch and Jericho3-AI routers and plan to begin shipping these products over the next six months to several hyperscale customers.
These will replace existing 400-gigabit networks with 800-gigabit connectivity. And beyond this, for next-generation 1.6-terabit connectivity, we have already started development on the Tomahawk 6 switch, which has, among other things, 200G SerDes generating throughput capacity of over 100 terabits per second.
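The throughput claim above can be sanity-checked with simple lane arithmetic. The 512-lane count below is an assumption based on typical switch-ASIC configurations, not a figure given on the call:

```python
# Hypothetical lane math for a next-generation switch ASIC.
serdes_rate_gbps = 200  # 200G SerDes, as stated on the call
lane_count = 512        # assumed lane count (hypothetical, not from the call)

# Aggregate throughput in terabits per second.
throughput_tbps = serdes_rate_gbps * lane_count / 1000
print(f"Aggregate throughput: {throughput_tbps} Tbps")  # consistent with "over 100 terabits per second"
```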
We are obviously excited that generative AI is pushing our engineers to develop cutting-edge silicon technology that has never been developed before. We know the end of Moore’s Law has set limits on computing in silicon, but what we are developing today feels very much like a revival.
We invest in fundamental technologies to enable our hyperscale customers with the best hardware capabilities to scale generative AI. We invest in industry-leading 200G SerDes that can drive optics and even copper cables. We have differentiating technology that breaks current bottlenecks in high-bandwidth memory access.
We also have high-speed and ultra-low-power chip-to-chip connectivity to integrate multiple AI compute engines. We have also invested heavily in complex packaging technologies, migrating from today’s 2.5D to 3D, which enables large memory to be integrated with the AI compute engines and accelerators.
In sum, we have developed an end-to-end platform of plug-and-play silicon IP that enables hyperscalers to develop and deploy their AI clusters with extremely accelerated time-to-market. Moving on to Q4: continuing to be driven by generative AI deployments, we expect our networking revenue to accelerate in excess of 20% year-on-year. This is driven by the strength in generative AI, which we forecast to grow about 50% sequentially and almost two times year-on-year.
Moving to wireless. Q3 wireless revenue of $1.6 billion represented 24% of semiconductor revenue, up 4% sequentially and flat year-on-year. The engagement with our North American customer continues to be deep and multiyear across WiFi, Bluetooth, Touch, RF Front-End and Inductive Power. So in Q4, consistent with the seasonal launch, we expect wireless revenue to grow over 20% sequentially and be down low-single-digit percent year-on-year.
Server storage connectivity revenue was $1.1 billion, or 17% of semiconductor revenue, flat year-on-year. Against a difficult year-on-year compare, we expect server storage connectivity revenue in Q4 to be down mid-teens percent year-on-year.
And moving on to broadband: following nine consecutive quarters of double-digit growth, revenue moderated to 1% year-on-year growth at $1.1 billion, or 16% of semiconductor revenue. In Q4, despite increasing penetration of 10G-PON deployments among telcos, we expect broadband revenue to decline high-single digits year-on-year.
Finally, Q3 industrial resales of $236 million declined 3% year-on-year, reflecting weak demand in China. In Q4, we expect an improvement, with industrial resales up low-single-digit percent year-on-year, largely reflecting seasonality.
So, in summary, Q3 semiconductor solutions revenue was up 5% year-on-year. In Q4, we expect semiconductor revenue growth of low-to-mid single-digit percent year-on-year. Sequentially, if we exclude generative AI, our semiconductor revenue will be flat.
Now turning to software. In Q3, infrastructure software revenue of $1.9 billion grew 5% year-on-year and represented 22% of total revenue. For core software, consolidated renewal rates averaged 117% over expiring contracts, and in our strategic accounts, we averaged 127%.
Within strategic accounts, annualized bookings of $408 million included $129 million or 32% of cross-selling of other portfolio products to these same core customers and over 90% of the renewal value represented recurring subscription and maintenance. Over the last 12 months, I should add, consolidated renewal rates averaged 115% over expiring contracts, and in our strategic accounts, we averaged 125%. Because of this, our ARR, the indicator of forward revenue, at the end of Q3 was $5.3 billion.
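As a sketch, the cross-sell share quoted above follows directly from the bookings figures (values in millions of USD, as stated on the call):

```python
# Strategic-account bookings figures quoted on the call, $M.
annualized_bookings_m = 408  # annualized bookings in strategic accounts
cross_sell_m = 129           # cross-sold portfolio products within those bookings

# Cross-sell as a share of total strategic-account bookings.
cross_sell_share = cross_sell_m / annualized_bookings_m
print(f"Cross-sell share of bookings: {cross_sell_share:.0%}")  # matches the 32% quoted above
```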
In Q4, we expect infrastructure software segment revenue to be up mid-single digit year-on-year. And on a consolidated basis for the Company, we are guiding Q4 revenue of $9.27 billion, up 4% year-on-year.
Before Kirsten tells you more about our financial performance for the quarter, let me provide a brief update on our pending acquisition of VMware. We have received legal merger clearance in Australia, Brazil, Canada, the European Union, Israel, South Africa, Taiwan and the United Kingdom and foreign investment control clearance in all necessary jurisdictions. In the US, the Hart-Scott-Rodino pre-merger waiting periods have expired, and there is no legal impediment to closing under US merger regulations.
We continue to work constructively with regulators in a few other jurisdictions and are in the advanced stages of the process toward obtaining the remaining required regulatory approvals, which we believe will be received before October 30th.
We continue to expect the transaction to close on October 30th, 2023. Broadcom is confident that the combination with VMware will enhance competition in the cloud and benefit enterprise customers by giving them more choice and control over where they locate their workloads.
With that, let me turn the call over to Kirsten.
Thank you, Hock. Let me now provide additional detail on our financial performance. Consolidated revenue was $8.9 billion for the quarter, up 5% from a year ago. Gross margins were 75.1% of revenue in the quarter, in line with our expectations. Operating expenses were $1.1 billion, down 8% year-on-year. R&D of $913 million was also down 8% year-on-year on lower variable spending.
Operating income for the quarter was $5.5 billion and was up 6% from a year ago. Operating margin was 62% of revenue, up approximately 100 basis points year-on-year. Adjusted EBITDA was $5.8 billion or 65% of revenue. This figure excludes $122 million of depreciation. Now a review of the P&L for our two segments.
Revenue for our semiconductor solutions segment was $6.9 billion and represented 78% of total revenue in the quarter. This was up 5% year-on-year. Gross margins for our semiconductor solutions segment were approximately 70%, down 160 basis points year-on-year, driven primarily by product mix within our semiconductor end markets.
Operating expenses were $792 million in Q3, down 7% year-on-year. R&D was $707 million in the quarter, down 8% year-on-year. Q3 semiconductor operating margins were 59%. Moving to the P&L for our infrastructure software segment. Revenue for the infrastructure software segment was $1.9 billion, up 5% year-on-year, and represented 22% of revenue.
Gross margins for infrastructure software were 92% in the quarter, and operating expenses were $337 million in the quarter, down 10% year-over-year. Infrastructure software operating margin was 75% in Q3 and operating profit grew 13% year-on-year. Moving on to cash flow. Free cash flow in the quarter was $4.6 billion and represented 52% of revenues in Q3.
We spent $122 million on capital expenditures. Days sales outstanding were 30 days in the third quarter, compared to 32 days in the second quarter. We ended the third quarter with inventory of $1.8 billion, down 2% sequentially. We continue to remain very disciplined in how we manage inventory across the ecosystem. We exited the quarter with 80 days of inventory on hand, down from 86 days in Q2.
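A minimal sketch of the free-cash-flow margin implied by the figures above (billions of USD, approximate as quoted on the call):

```python
# Q3 FY23 figures quoted on the call, $B.
q3_revenue_b = 8.9         # consolidated revenue
q3_free_cash_flow_b = 4.6  # free cash flow

# Free cash flow as a share of revenue.
fcf_margin = q3_free_cash_flow_b / q3_revenue_b
print(f"Free cash flow margin: {fcf_margin:.0%}")  # ~52% of revenue, as stated
```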
We ended the third quarter with $12.1 billion of cash and $39.3 billion of gross debt, of which $1.1 billion is short term. The weighted average coupon rate and years to maturity of our fixed rate debt is 3.61% and 9.7 years, respectively.
Turning to capital allocation. In the quarter, we paid stockholders $1.9 billion of cash dividends. Consistent with our commitment to return excess cash to shareholders, we repurchased $1.7 billion of our common stock and eliminated $460 million of our common stock for taxes due on vesting of employee equity, resulting in the repurchase and elimination of approximately 2.9 million AVGO shares.
The non-GAAP diluted share count in Q3 was 436 million shares. As of the end of Q3, $7.3 billion remained under the share repurchase authorization. We suspended our repurchase program in early August in accordance with SEC rules, which do not allow stock buybacks during the period in which VMware shareholders are electing between cash and stock consideration in our pending transaction to acquire VMware.
We expect the election period to end shortly before the anticipated closing of the transaction on October 30th, 2023. Excluding the impact of any share repurchases executed prior to the suspension, in Q4 we expect the non-GAAP diluted share count to be 435 million shares.
Based on current business trends and conditions, our guidance for the fourth quarter of fiscal 2023 is for consolidated revenues of $9.27 billion and adjusted EBITDA of approximately 65% of projected revenue.
In Q4, we expect gross margins to be down 80 basis points sequentially on product mix. We note that our guidance for Q4 does not include any contribution from VMware.
That concludes my prepared remarks. Operator, please open up the call for questions.
Thank you. [Operator Instructions] And our first question will come from the line of Vivek Arya with Bank of America. Your line is open.
Thanks for taking my question. Hock, my question has to do with your large AI ASIC compute offload contract. Is this something you feel you have the visibility to hold on to for the next several years, or does this face some kind of annual competitive situation, given that you have a range of both domestic and Taiwan-based ASIC competitors who think they can do it cheaper? So I’m just curious, what is your visibility into maintaining this competitive win and then hopefully growing content over the next several years?
I’d love to answer your question, Vivek, but I will not, not directly anyway, because we do not discuss our dealings, especially specific dealings of the nature you’re asking about, with respect to any particular customer. So that’s not appropriate. But I will tell you this in broad generality: in many ways, it looks like our long-term agreements with our large North American OEM customer in wireless; it’s very similar. We have a multiyear, very strategic engagement, usually in more than one leading-edge technology, which is what you need to create those kinds of products, whether it’s in wireless or, in this case, in generative AI; multiple technologies go into creating the products they want. It’s very strategic, it’s multiyear, and the engagement is very broad and deep.
Thank you, Hock.
Thank you. One moment for our next question. And that will come from the line of Harlan Sur with JPMorgan. Your line is open.
Good afternoon. Thanks for taking my question. Great to see the market diversification, market leadership and supply discipline, really sort of allowing the team to drive this sort of stable $6 billion per quarter run rate in a relatively weak macro environment. Looking at your customers’ demand profiles, your strong visibility, given your lead times, can the team continue to sustain a stable-ish sort of $6 billion revenue profile ex-AI over the next few quarters before macro trends potentially start to improve or do you anticipate enterprise and service provider trends to continue to soften beyond this quarter?
You’re asking me to guide beyond a quarter. I mean, hey, that’s above my pay grade, Harlan. But I just want to point out to you: we promised you, in late fiscal ’22, that fiscal ’23 would likely be a soft landing. And as you pointed out, and per my remarks, that’s exactly what we are seeing.
Okay, perfect. Thank you.
Thank you. One moment for our next question. And that will come from the line of Ross Seymore with Deutsche Bank. Your line is open.
Hi, guys. Thanks for letting me ask a question. Hock, I want to stick with the networking segment and just get a little more color on the AI demand that you talked about growing so significantly sequentially in the fourth quarter. Is that mainly on the compute offload side or is the networking side contributing as well? Any color on that would be helpful.
They go hand in hand, Ross; these things go very much hand in hand. You don’t deploy those AI engines for generative AI in onesies or twosies anymore. They come in large clusters, or pods, as some hyperscalers will call it. And with that, you need a fabric: networking connectivity among thousands, tens of thousands today, of those AI engines, whether it’s GPUs or some other customized AI silicon compute engine. The whole fabric with its AI engines literally represents the computer, the AI infrastructure. So it’s hand in hand; our numbers are very correlated to AI engines, whether we do the AI engines or somebody else’s merchant silicon does those GPU engines. We supply a lot of the Ethernet networking solutions.
Thank you. One moment for our next question. And that will come from the line of Stacy Rasgon with Bernstein. Your line is open.
Hi, guys. Thanks for taking my question. If I take that sort of $6 billion non-AI run rate and calculate what the AI is, I’m actually getting to that 15% of semiconductor revenue that you mentioned last quarter. Do you still think it’s going to be 25% of revenue next year? So I guess two questions. One, is that number still 25%, or is it higher or lower? And two, how do I get there with the two moving pieces, AI and non-AI? Because that percentage goes up if the non-AI goes down.
Well, there are a couple of assumptions one has to make, none of which I’m going to help you with, as you know, because I don’t guide next year. Except to tell you that our AI revenue, as we indicated, has been on an accelerating trajectory, and no surprise; deployment has been on an extremely urgent basis, and the demand we are seeing has been very strong. We saw it accelerating through the end of fiscal ’22, and it continues to accelerate through the end of fiscal ’23, as we just indicated to you. And for fiscal ’24, we expect a somewhat similar accelerating trend. So to answer your question: we have indicated previously that for fiscal ’24, which is a forecast, we believe AI will be over 25% of our semiconductor revenue.
Got it. Thank you very much.
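The share-of-revenue math in the exchange above can be sketched as follows. The $1 billion AI figure is the rounded "over $1 billion" value from the prepared remarks, so the result is approximate:

```python
# Approximate Q3 FY23 figures from the prepared remarks, $B.
q3_semiconductor_revenue_b = 6.9  # semiconductor solutions revenue
generative_ai_revenue_b = 1.0     # "over $1 billion" of AI revenue (rounded)

# AI revenue as a share of semiconductor revenue.
ai_share = generative_ai_revenue_b / q3_semiconductor_revenue_b
print(f"AI share of semiconductor revenue: {ai_share:.0%}")  # in the ballpark of the ~15% discussed
```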
Thank you. One moment for our next question. And that will come from the line of Toshiya Hari with Goldman Sachs. Your line is open.
Hi. Thank you so much for taking the question. I had one quick clarification then a question. On the clarification, Hock, can you talk about the supply environment, if that’s a constraining factor for your AI business? And if so, what kind of growth from a capacity perspective do you expect into fiscal ’24? And then my question is more on the non-AI side. As you guys talked about, you’ve done really well in managing your own inventory. But when you look across inventory levels for your customers or at your customers, it seems as though they’re sitting on quite a bit of inventory. So what’s your confidence level as it pertains to a potential inventory correction in your non-AI business, networking business going forward? Thank you.
Okay. Well, on the first question, you’re talking about the supply chain. These products for generative AI, whether they are networking or the custom engines, take long lead times. These are very leading-edge silicon products across the entire stack: from the chip itself, to the packaging, to even the memory, the kind of HBM that is used in those chips. It’s all very long lead time and very cutting-edge product. Everybody wants supply within lead times, so by definition you have constraints, and so do we. We’re trying to work through the constraints, but there are a lot of them, and that will never change as long as orders flow in faster than the lead time needed for production, because production of these parts is very extended. That’s the constraint we see, as orders come in faster than lead times.

On your second part: as I indicated, I call it a soft landing. Another way of looking at it is that the approximately $6 billion of non-AI-related revenue per quarter is kind of bumping up and down on a plateau. Think of it that way. Growth has come down to very little, but it’s still pretty stable up there. And as I indicated too, we don’t have any one product in any one end market. Our portfolio is fairly broad and diversified, categorized into end markets with multiple different products. Each product runs on its own cadence, sometimes on the timing of when the customer wants it. So you see it bumping up and down at different levels, but it averages out each quarter, as we pointed out, at around $6 billion. And for now, we’re seeing that happen.
Great. Thank you.
Thank you. One moment for our next question. And that will come from the line of Karl Ackerman with BNP Paribas. Your line is open.
Thank you. Just on gross margins: you had a tough compare year-over-year for your semiconductor gross margins, which, of course, remain some of the best in semis. But is there a way to think about or quantify the headwind to gross margins this year from still-elevated logistics costs and substrate costs, and, as the supply chain perhaps frees up next year, whether that could be a tailwind? Thank you.
You know, Karl, it’s Hock. Let me take a stab at this question, because it really needs a more holistic answer, and here’s what I mean. The impact on our gross margin, more than anything else, is not related to transactional supply chain issues. I’m sure they have an effect at any particular point in time, but they’re not as material, and not as sustained, in terms of impacting trends. What drives gross margin for us as a company is, frankly, product mix. As I mentioned earlier, we have a broad range of products, even as we try to make order out of it from the viewpoint of communication and classify them into multiple end markets. Within each end market there are many products, and they all have different gross margins depending on where they are used, their criticality, and various other aspects. So we have a real mixed bag.

What drives the trend in gross margin more than anything else is the pace of adoption of next-generation products in each product category; think of it that way, and measure it across multiple products. Each time a new generation of a particular product gets adopted, we get the opportunity to uplift gross margin. Therefore the rate of adoption matters: a product that turns over every few years has a different gross margin growth profile than one on a more extended cycle. That is the most important variable.

Now, coming down specifically to your question: during ’21 and ’22, in particular, we had an up cycle in the semiconductor industry. There were lockdowns, changes in behavior, and a high level of demand for semiconductors, or, put it this way, a shortage of supply relative to demand. There was accelerated adoption of a lot of products.
So we benefited, not just in revenue, as I indicated; we benefited from gross margin expansion across the board, as a higher percentage of our products out there got adopted into the next generation faster. Past this, there is probably some slowdown in the adoption rate, and so gross margin might not expand as fast. But it will work itself out over time. And I’ve always told you guys the model this company has seen. It’s empirical, but based on this underlying basic economics: with the broad range of products we have, each with a different life cycle of upgrading to the next generation, we have seen over the years, on a long-term basis, an expansion of gross margin on a consolidated basis for semiconductors of 50 to maybe 150 basis points annually. And that’s a long-term basis. In between, of course, you’ve seen numbers that go over 200 basis points; that happened in 2022. And then you have to offset that with years where gross margin expansion might be much less, like 50. That is the process you will see us go through on an ongoing basis.
Thank you. One moment for our next question. That will come from the line of Harsh Kumar with Piper Sandler. Your line is open.
Yes, Hock. So congratulations on your textbook soft landing; I mean, it’s perfectly executed. I had a question, I guess, more so on the takeoff timing. You’ve got a lead time that is about one year for most of your product lines, so I suppose you have visibility a year out. The question really is, are you starting to see growth in backlog about a year out? In other words, can we assume that we’ll spend time at the bottom for about a year and then start to come back? Or is it happening before that time frame, or maybe not even a year out? Just any color would be helpful. And then, as a clarification, Hock, is China approval needed for VMware or not?
Let’s start with lead times, and asking me to predict when the up cycle would happen. It’s still too early for me to want to predict that, to be honest with you, because even though we have 50-week lead times, I have overlaid on them today a lot of bookings related to generative AI, and a decent amount of bookings related to wireless, too. So that kind of biases what I’m looking at. So the answer to you, a very unsatisfactory answer to your question, I know, is that it’s too early for me to tell, but we do have a decent amount of orders. All right.
And then on VMware, Hock?
Let me say this. I made those specific remarks on regulatory approval. I ask that you think it through, read it through, and let’s stop right there.
Okay. Fair enough. Thank you, Hock.
Thank you. And one moment for our next question. And that will come from the line of Aaron Rakers with Wells Fargo. Your line is open.
Yes. Thanks for taking the question, and congrats also on the execution. I’m just curious, as I think about the Ethernet opportunity in AI fabric build-outs: Hock, any updated thoughts, now with the Ethernet consortium that you’re part of, on Ethernet relative to InfiniBand, particularly at the east-west layer of these AI fabric build-outs, with Tomahawk 5 and Jericho3-AI sounding like they’re going to start shipping in volume maybe in the next six months or so? Is that an inflection where you actually see Ethernet really start to take hold in the east-west traffic layer of these AI networks? Thank you.
That’s a very interesting question. And frankly, my personal view is that InfiniBand has been the choice for years and years, for generations of what we have called high-performance computing. And high-performance computing was the old term for AI, by the way. It was the choice because those were very dedicated application workloads, not scaled out the way large language models drive today. With large language models driving it, and most of these large language models now being driven a lot by the hyperscalers, frankly, you see Ethernet getting a huge amount of traction. And Ethernet is shipping; it’s not just getting traction for the future, it is shipping in many hyperscalers. And it coexists, that’s the best way to describe it, with InfiniBand. It all depends on the workloads; it depends on the particular application that’s driving it. And at the end of the day, it also depends on, frankly, how large you want to scale your AI clusters. The larger you scale, the more tendency you have to open it up to Ethernet.
Yeah, thank you.
Thank you. One moment for our next question. And that will come from the line of Matt Ramsay with TD Cowen. Your line is open.
Yes. Thank you very much. Good afternoon. Hock, I wanted to ask, I guess, a two-part question on your custom silicon business. Obviously, the large customer is ramping really, really nicely, as you described. But there are many other large hyperscale customers that are considering custom silicon, maybe catalyzed by Gen AI, maybe not. I wonder if the recent surge in Gen AI spending and enthusiasm has maybe widened the aperture of your appetite to take on big projects for other large customers in that arena? And secondly, is there any appetite at all to consider custom switching and routing products for customers, or is the focus really a keen one on merchant in those areas? Thank you.
Well, thank you. That’s a very insightful question. We only have one large customer in AI engines. We’re not a GPU company, and we don’t do much compute, as you know, other than offload computing, and having said that, it’s very customized. What I’m trying to say is that I don’t want to mislead you guys. The fact that I may have engagements, and I’m not saying I do, on a custom program should not at all be translated in your minds into, oh yes, this is a pipeline that will translate to revenue. Creating hardware infrastructure to run these large language models of hyperscalers is an extremely difficult and complex task for anyone to do, and even if there is an engagement, it does not translate easily to revenue. So suffice it to say, I have one hyperscaler we are shipping custom AI engines to today, and I’ll leave it at that, if you don’t mind, okay? Now, as far as customized switching and routing, sure, that happens. Many of those few OEMs who supply systems, which are switches or routers, have their own custom solutions together with their own proprietary network operating systems. That’s been the model for the last 20, 30 years. And today, 70% of the market is on merchant silicon; I won’t say the network operating system, but certainly the silicon is merchant silicon. So the message here is, there are advantages to doing a merchant solution versus trying to do a custom solution, as performance over the last 20 years has shown.
Thanks, Hock. Appreciate it.
Thank you. One moment for our next question. And that will come from the line of Christopher Rolland with Susquehanna. Your line is open.
Hey, thanks for the question. So I think there have been two really great parts of the Broadcom story that have surprised me. The first is the AI upside. And the second is just the resilience of the core business, particularly storage and broadband, in light of what have been kind of horror shows for some of your competitors who, I think, are in clear down cycles. So I’ve maybe been waiting for a reset in storage and broadband for a while, and it looks like Q4 gets a little softer here for you. Maybe you’re calling that reset a soft landing, Hock. So I guess maybe you can describe a little bit more for us what you mean by a soft landing. Does that mean that we have indeed landed here? Would you expect those businesses to be bottoming here, at least? And I know you’ve talked about it before, you guys have had tight inventory management. But is there perhaps even a little bit more inventory burn showing up for these markets? Or are the dynamics here just end demand that has started to deteriorate? Thanks.
Thanks. First and foremost, and you’ve heard me talk about this in preceding quarterly earnings calls, I continue to say it, and Kirsten reemphasized it today: we ship very much only to the end demand of our end customers. And we’re looking beyond enterprise, even beyond telcos, even beyond OEMs. We look to the end users, the enterprises behind those OEM customers. We try to. It doesn’t mean we are right all the time, but we are getting very good at it. And we only ship to that. And what you’re seeing, which is why I’m saying this, is, for instance, some numbers in broadband and some numbers in server storage that seem not quite as flat. That is why I made the point of purposely saying, look at it collectively, taking out generative AI. My whole portfolio of products out there is pretty broad, and it gets segmented into different end markets. And when we reach what I call a plateau, as we are in now, or a soft landing, as you call it, you never stay flat. There will be some products that, because of the timing of shipments, come in higher, and some whose shipments fall at the wrong time and come in a bit lower. In each quarter, we may show you differences, and we are showing some of those differences in Q3 and some even in Q4. And that’s largely related to the logistics timing of customer shipments, for particular customers and a whole range of products that go this way. This is what I referred to in my remarks as revenues with puts and takes around a median. And that median, I’d also point out to highlight to you guys, has sat around $6 billion, and it has been sitting around $6 billion since the start of fiscal ’23. And as we sit here in Q4, it’s still at $6 billion. Not exactly there, because there are some parts of it that may go up and some parts that go down. Those are the puts and takes we talk about. And I hope that pretty much addresses what you are trying to get at, which is: is it a trend, or is it just a fluctuation?
And to use my expression, I call those fluctuations, or puts and takes, around a median that we’re seeing here. And I wouldn’t have said it if I had not been seeing it now for three quarters in a row, around $6 billion.
Thank you. One moment for our next question. And that will come from the line of Edward Snyder with Charter Equity Research.
Thank you very much. Hock, I want to shift gears maybe a little bit here and talk about your expectations, and actually indications from your customers, about the integrated optics solutions that will start shipping next year. It seems to me, looking at what you’re offering and the significant improvement you get in performance and size, this would be something of great interest. Is it limited by inertia, architectural inertia from the existing solutions? What kind of feedback are you getting? And what should we expect to see? Because it’s a rather new market for you overall, and you’ve not been in it before, I’m just trying to get a feel for what your expectations are and why maybe we should start looking at this more closely.
You should. I did; I made my investment, so at least you should look at it a bit. I’m just kidding. But we have invested in silicon photonics, which is literally integrating everything in one single packaged solution. As an example, our next-generation Tomahawk 5 switch, which will start shipping in the middle of next year under a program we call Bailly, is a fully integrated silicon photonics switch. And you’re right, it is very low power. Optics have always had optical and mechanical failure characteristics; by pulling them into an integrated silicon photonics solution, you take away those failures and yield rates on the mechanical and optical side and translate them, literally, into silicon yield rates. And so it’s much more reliable, we like to believe, than the conventional approach. So your question is, why won’t more people jump into it? Well, because nobody else has done it. We are pioneering this silicon photonics architecture. We have done a pilot, a POC, a proof of concept, in Tomahawk 4 at a couple of hyperscalers, but not in production volume. We now feel comfortable that we have reliability data from those instances. And that’s why we feel comfortable going into a production launch with Tomahawk 5. But as people say, the proof is in the eating. And we will get that at one or two hyperscalers, who will demonstrate how power-efficient and effective it can be. And once we do that, we hope it will start to proliferate to other hyperscalers, because once one of them does it and reaps the benefits of this silicon photonics solution, it’s there; you know it. I have indicated the power savings are simply enormous: a 30%, 40% power reduction. And power is a big thing now in data centers, particularly, I would add, in generative AI data centers. That’s a big use case that could come over the next couple of years. All right.
Thank you. Thank you all for participating in today’s question-and-answer session. I would now like to turn the call over to Ms. Ji Yoo for any closing remarks.
Thank you, operator. In closing, we would like to highlight that Broadcom will be attending the Goldman Sachs Communacopia and Technology Conference on Thursday, September 7th. Broadcom currently plans to report its earnings for the fourth quarter of fiscal ’23 after close of market on Thursday, December 7th, 2023. A public webcast of Broadcom’s earnings conference call will follow at 2:00 P.M. Pacific. That will conclude our earnings call today. Thank you all for joining. Operator, you may end the call.
Thank you all for participating. This concludes today’s program. You may now disconnect.