GSI - Q1 2024

July 27, 2023

Transcript

Operator (participant)

Ladies and gentlemen, thank you for standing by. Welcome to GSI Technology's Q1 fiscal 2024 results conference call. At this time, all participants are in a listen-only mode. Later, we will conduct a question-and-answer session. At that time, we will provide instructions for those interested in entering the queue for the Q&A. Before we begin today's call, the company has requested that I read the following safe harbor statement. The matters discussed in this conference call may include forward-looking statements regarding future events and the future performance of GSI Technology that involve risks and uncertainties that could cause actual results to differ materially from those anticipated. These risks and uncertainties are described in the company's Form 10-K, filed with the Securities and Exchange Commission.

Additionally, I have been asked to advise you that this conference call is being recorded today, July 27, 2023, at the request of GSI Technology. Hosting the call today is Lee-Lean Shu, the company's Chairman, President, and Chief Executive Officer. With him are Douglas Schirle, Chief Financial Officer, and Didier Lasserre, Vice President of Sales. I would now like to turn the conference over to Mr. Shu. Please go ahead, sir.

Lee-Lean Shu (Chairman, President, and CEO)

Good day, everyone, and welcome to our Q1 fiscal year 2024 earnings call. We are happy to update you on the milestones we have achieved on our journey toward innovation and growth. Our dedication and focus have allowed us to make good progress during Q1 of fiscal 2024. Let's start with our progress on advancing our growth and innovation objectives. In line with our commitment to land Gemini-I customers, we have moved forward with demos for two of our SAR targets. Additionally, we have added new resources to address the Fast Vector Search market and to hone our product for this application. Didier will provide more color on this in his comments. I'm also pleased to share that version two of our LPython compiler stack is on track for release to beta customers by the end of this summer.

This marks a significant step forward in our product roadmap, enabling us to deliver cutting-edge solutions and drive customer satisfaction. LPython is designed to make it easy for other developers to contribute to and improve the software. Part of LPython's appeal is that it can be used on different operating systems, like Windows, Linux, and macOS. The reason LPython is so fast is that it performs optimization at both a high level and a low level. This means it makes the code more efficient before running it. Additionally, LPython allows for easy customization of the different ways it can convert code, which can be useful for specific needs or preferences. Not only is LPython fast and flexible, but the stack is also usable for other applications, and we believe we could readily create an ecosystem beyond the APU.

We are closing in on successfully completing the tape-out of Gemini-II, which is expected to be finalized and sent off to TSMC in the next few weeks. This tape-out is a major achievement and showcases our commitment to pushing the boundaries of AI chip technology. Gemini-II is an extremely complex chip, and the successful completion of this milestone serves as a testament to our talented team's hard work and expertise. We anticipate sampling the solution during the second half of calendar year 2024. We remain focused on driving innovation, delivering exceptional products, and leveraging those strengths to foster strategic partnerships that will help propel our company forward. The strategic additions to our team reinforce our commitment to driving growth, fostering partnerships, and delivering innovative solutions to our customers.

We are excited about the opportunities and the value these individuals will bring to our organization as they work with our dedicated team to position us for success. I want to thank our employees, customers, and shareholders for their unwavering support and commitment. Together, we will continue to build a bright future for our company. Now I will hand the call over to Didier, who will discuss our business development and sales activities. Please go ahead, Didier.

Didier Lasserre (VP of Sales)

Thank you, Lee-Lean. I want to start by addressing a point mentioned earlier by Lee-Lean. We have strengthened our team with the addition of two highly skilled professionals who will play pivotal roles in developing strategic partnerships with hyperscalers and establishing our presence in the Fast Vector Search market. These individuals bring a wealth of knowledge and extensive experience in their respective fields. One of our new team members, who will assume the senior data scientist role, will lead our team on various projects and offload some of the workload from our division in Israel. With this team, we will transfer some functions to the US, including developing software applications and functions and undertaking government-related projects that require collaboration with US-based employees. Our US data science team will play a crucial role in assisting customers with the compiler and conducting benchmarks across different platforms.

Our new data scientist will collaborate with this team to optimize our plug-in for Fast Vector Search, paving the way for the successful deployment of this business line for our company. Our second new resource brings a wealth of experience from the semiconductor sector, having worked for leading FPGA companies. This background has afforded him extensive industry connections, which will be invaluable as we strive to engage and form partnerships with the top hyperscalers. He will lead the building of our platform to explore strategic partners for our APU technology and to develop service and licensing revenue sources to fund future APU development. On the last call, we mentioned we were working with a major hyperscaler on the Gemini architecture for inference of large language models.

This relationship holds great potential for our growth. We recently added additional resources to this team. We have conducted a feasibility study exploring the Gemini architecture, and I am delighted to say that we are making great progress with this prospect. The study specifically focuses on GPT inference utilizing a future APU. We found that the APU, when compared to existing technologies, can achieve significantly enhanced performance levels while utilizing the same process technology. GPT is a memory-intensive application. It requires a very large and very fast memory hierarchy, from external storage memory all the way to the processor's internal working memory. In the 175-billion-parameter GPT model, 175 GB of fast memory is required to store the model's parameters.
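The memory requirement quoted here follows from simple arithmetic. The sketch below is illustrative only: it assumes one byte per parameter for the 175 GB figure (a half-precision format would double it), which is an editorial assumption, not a GSI specification.

```python
# Illustrative arithmetic: memory needed just to hold a model's
# parameters at a given numeric precision. 175 GB corresponds to
# 175 billion parameters stored at 1 byte each (e.g. an 8-bit
# format); 16-bit storage would double that.

def param_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Return parameter storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

GPT_PARAMS = 175e9  # 175 billion parameters

for bits in (8, 16, 32):
    gb = param_memory_gb(GPT_PARAMS, bits / 8)
    print(f"{bits}-bit storage: {gb:.0f} GB")
```

At 8 bits per parameter this reproduces the 175 GB figure cited on the call.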

This can be accomplished by incorporating a processor die and several HBMs, which are High Bandwidth Memories, on a 2.5D substrate. It also requires large and very fast internal memory next to the processor core, as working memory to support the large matrix multiplications performed by the core. The APU architecture has inherently large built-in memory and large memory bandwidth that not only provide memory throughput but also support very high-performance computation. Gemini can achieve similar peak TOPS per watt to state-of-the-art GPUs on the same process technology node. However, with our massive L1 size and large bandwidth, the APU can sustain average TOPS nearly the same as peak TOPS, unlike a GPU.
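The peak-versus-average TOPS point can be framed as a utilization calculation; the numbers below are hypothetical illustrations chosen by the editor, not measured GSI or GPU figures.

```python
# Illustrative only: sustained throughput depends on how well memory
# can feed the compute units, not just on peak compute.
# Effective TOPS = peak TOPS x utilization, where utilization
# collapses when the processor stalls waiting on memory.

def effective_tops(peak_tops: float, utilization: float) -> float:
    """Sustained throughput given a peak rating and a duty fraction."""
    return peak_tops * utilization

# A memory-bound processor might sustain a small fraction of its
# peak on large-model inference; an in-memory architecture aims
# to keep utilization near 1.0.
print(effective_tops(100.0, 0.10))  # memory-bound case
print(effective_tops(100.0, 0.95))  # memory-fed case
```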

In a single module composed of a 5 nm Gemini [die plus] 6 HBM3 [die], we have calculated that we could achieve more than 0.6 tokens per second per watt, with an input size of 32 tokens, to generate a context of 64 tokens in the 175-billion-parameter GPT model. This output is more than 60 times the performance that could be delivered by a state-of-the-art GPU on a slightly better technology node. This study was done in conjunction with laying out the development roadmap for Gemini-III to move further into generative AI territory. The APU holds a distinctive advantage in delivering low power consumption at peak performance levels, given its in-memory processing capability. As we have seen, generative AI applications like ChatGPT are becoming more capable with each generation.
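To put the efficiency figure in context, the arithmetic below converts tokens per second per watt into throughput for an assumed power budget. Only the 0.6 tokens/s/W and 60x figures come from the call; the 500 W module power is a purely hypothetical illustration.

```python
# Hedged back-of-the-envelope conversion of an efficiency figure
# (tokens per second per watt) into absolute throughput.

apu_tokens_per_sec_per_watt = 0.6   # figure quoted on the call
claimed_speedup = 60.0              # APU vs. state-of-the-art GPU

# Efficiency implied for the comparison GPU by the 60x claim.
gpu_tokens_per_sec_per_watt = apu_tokens_per_sec_per_watt / claimed_speedup

module_power_watts = 500.0  # hypothetical power envelope, not a GSI spec
apu_throughput = apu_tokens_per_sec_per_watt * module_power_watts

print(f"APU: {apu_throughput:.0f} tokens/s at {module_power_watts:.0f} W")
print(f"Implied GPU efficiency: {gpu_tokens_per_sec_per_watt:.3f} tokens/s/W")
```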

The driving force behind this improved capability is the number of parameters used by the large language models that power them. More parameters require more computation, leading to higher energy usage and a much larger carbon footprint. To help combat this growth in carbon footprint, researchers are exploring new ways to compress data to reduce memory requirements. There are trade-offs between the formats that researchers are investigating, and to navigate those trade-offs they need a flexible solution. Unfortunately, GPUs and CPUs lack this flexibility and are limited to a small, fixed set of data formats. GSI Technology's APU technology provides the flexibility to explore new methods. By allowing computation to be performed at the bit level, computation can be performed on any size data element, with a resolution as fine as a single bit.
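As a minimal sketch of why bit-level addressability matters, the illustrative Python below (not GSI's implementation) packs values at an arbitrary bit width instead of rounding each one up to the nearest standard 8-, 16-, or 32-bit container.

```python
# Illustrative sketch: storing values at exactly the bit width they
# need, rather than in fixed-size containers, reduces memory and
# data-transfer requirements.

def pack(values, bits):
    """Pack unsigned ints into one big int, `bits` bits per value."""
    out = 0
    for i, v in enumerate(values):
        assert 0 <= v < (1 << bits), "value does not fit in the format"
        out |= v << (i * bits)
    return out

def unpack(packed, bits, count):
    """Recover `count` values of width `bits` from a packed int."""
    mask = (1 << bits) - 1
    return [(packed >> (i * bits)) & mask for i in range(count)]

vals = [5, 0, 3, 7, 2, 6, 1, 4]
packed = pack(vals, 3)                 # 8 values in 24 bits, not 64+
assert unpack(packed, 3, len(vals)) == vals
print(f"{len(vals)} 3-bit values stored in {len(vals) * 3} bits")
```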

This will allow innovative solutions to be developed and will reduce energy use by optimizing the number of usable bits for each data transfer. As we work with potential strategic licensing partners, we can increase awareness of our capabilities to solve some of AI's biggest challenges. Regarding our work on the Gemini-I solution, we have made notable progress with two of our SAR targets, underscoring our commitment to expanding our presence in this market. We have set a goal of closing a sale in FY 2024 with one of these customers. As I mentioned, we recently added resources to support our beta Fast Vector Search customers. With additional resources in place, we anticipate building a SaaS revenue source with customized solutions for Fast Vector Search customers before the end of this fiscal year. Let me switch now to the customer and product breakdown for Q1.

In Q1 of fiscal 2024, sales to Nokia were $1.9 million, or 33% of net revenues, compared to $1.3 million, or 14% of net revenues, in the same period a year ago, and $1.2 million, or 21.8% of net revenues, in the prior quarter. Military/defense sales were 33.8% of Q1 shipments, compared to 22.3% of shipments in the comparable period a year ago and 44.2% of shipments in the prior quarter. SigmaQuad sales were 58.6% of Q1 shipments, compared to 44.8% in Q1 of fiscal 2023 and 46.3% in the prior quarter. I'd now like to hand the call over to Doug. Please go ahead, Doug.

Douglas Schirle (CFO)

Thank you, Didier. GSI reported a net loss of $5.1 million, or $0.21 per diluted share, on net revenues of $5.6 million for Q1 of fiscal 2024, compared to a net loss of $4 million, or $0.16 per diluted share, on net revenues of $8.9 million for Q1 of fiscal 2023, and a net loss of $4 million, or $0.16 per diluted share, on net revenues of $5.4 million for Q4 of fiscal 2023. Gross margin was 54.9% in Q1 of fiscal 2024, compared to 60.2% in the prior year period and 55.9% in the preceding Q4.

The year-over-year decrease in gross margin in Q1 of fiscal 2024 was primarily due to the impact of fixed manufacturing costs in our cost of goods on lower net revenue. Total operating expenses in Q1 of fiscal 2024 were $8.2 million, compared to $9.3 million in Q1 of fiscal 2023 and $6.9 million in the prior quarter. Research and development expenses were $5.2 million, compared to $6.6 million in the prior year period and $5 million in the prior quarter. Selling, general and administrative expenses were $3 million in the quarter ended June 30, 2023, compared to $2.7 million in the prior year quarter and $1.9 million in the previous quarter.

We estimate that through June 30, 2023, we have incurred research and development spending in excess of $140 million on our APU product offering. The Q1 fiscal 2024 operating loss was $5.1 million, compared to an operating loss of $3.9 million in the prior year period and an operating loss of $3.9 million in the prior quarter. The Q1 fiscal 2024 net loss included interest and other income of $80,000 and a tax provision of $51,000, compared to interest and other expense of $26,000 and a tax provision of $60,000 for the same period a year ago.

In the preceding Q4, net loss included interest and other income of $101,000 and a tax provision of $191,000. Total Q1 pre-tax stock-based compensation expense was $820,000 compared to $638,000 in the comparable quarter a year ago, and $515,000 in the prior quarter. At June 30th, 2023, the company had $27.7 million in cash, cash equivalents and short-term investments, compared to $30.6 million in cash, cash equivalents and short-term investments at March 31st, 2023. Working capital was $32.1 million as of June 30, 2023 compared to $34.7 million at March 31, 2023, with no debt.

Stockholders' equity as of June 30th, 2023, was $48.6 million compared to $51.4 million as of the fiscal year ended March 31, 2023. During the June quarter, the company filed a registration statement on Form S-3, so that the company would be in a position to quickly access markets and raise capital if the opportunity arises. Operator, at this point, we'll open the call for Q&A.

Operator (participant)

Thank you. We will now be conducting a question-and-answer session. If you would like to ask a question, please press star one on your telephone keypad. A confirmation tone will indicate your line is in the question queue. You may press star two if you would like to remove your question from the queue. For participants using speaker equipment, it may be necessary to pick up your handset before pressing the star key. One moment, please, while we poll for questions. Our first question comes from Nick Doyle of Needham & Company. Please, sir, go ahead.

Nick Doyle (Equity Research Analyst)

Nick Doyle from Needham. Thanks for taking my questions. Just first, could you expand on the drivers behind the gross margin this quarter and next quarter? We see a little bit of a decline this quarter, and you expect it to increase next quarter. Could you just expand on, you know, why that's happening?

Douglas Schirle (CFO)

Yeah, it's really related to product mix. We do our best effort at forecasting what we believe the revenues are going to be during a quarter. Obviously, with only about a third or so of the quarter booked at the beginning of the quarter, we have to estimate where the revenues are going to come from. It's strictly tied to product mix, nothing more.

Nick Doyle (Equity Research Analyst)

Okay. Could you just tell us what part of the mix was higher this quarter that's driving the lower margin?

Douglas Schirle (CFO)

Yeah, the biggest thing that impacts the margin is that we have quite a bit of military business, and that has the highest margin. Alcatel-Lucent revenues, I'm sorry, Nokia revenues, are generally at a reasonable level, and that also is good margin. Probably the biggest factor is military sales at this point.

Nick Doyle (Equity Research Analyst)

Okay, great. Makes sense. You talked about how you tested your APU, which can basically sustain higher TOPS and drive better performance per watt with the specific GPT application. Can you expand on how that's done, and how your APU differentiates from CPUs and GPUs on the market? Is it entirely to do with the ability to do computations at the bit level? That was my understanding. Any detail there would be great.

Lee-Lean Shu (Chairman, President, and CEO)

Yeah. First of all, a GPU has a very small cache, and I think that's good for graphics processing, but when you talk about the huge number of parameters in a large language model, they can only deliver a fraction of what their TOPS rating suggests. In the APU, we have huge memory inside the chip, and we calculate the TOPS strictly from how well we can support the processing with our memory. That's how we come up with the TOPS, and that's why our average TOPS is just the same as our peak TOPS. I hope that answers your question.

Nick Doyle (Equity Research Analyst)

Okay. If I could just sneak one more in. I think in the past you've talked about the cost of Gemini-II being about $2.5 million. Is that still the case, and is that entire tape-out cost behind us, or is it still ongoing?

Lee-Lean Shu (Chairman, President, and CEO)

No, no, sorry, that's just the tape-out cost.

Douglas Schirle (CFO)

The $2.5 million is the tape-out cost, so when we tape out, the expense could hit later this quarter or the early part of the October quarter. That's just the tape-out cost. As we said in our comments, we've incurred probably in excess of $140 million developing this product line, and that's both Gemini-I and Gemini-II.

Nick Doyle (Equity Research Analyst)

Great, thanks.

Lee-Lean Shu (Chairman, President, and CEO)

Just one comment. We published a white paper on our website with a further discussion of why the APU is good for large language models. If you're interested, visit www.gsitechnology.com.

Operator (participant)

Just to remind you, if you'd like to ask a question, please press star one on your telephone keypad. One moment, please, while we poll for questions. Our next question comes from Luke Bohannon, Private Investor. Please, sir, go ahead.

Luke Bohannon (Private Investor)

Thanks. In terms of that study comparing with GPUs on peak performance, did you mention that it was projecting a 5 nm architecture?

Lee-Lean Shu (Chairman, President, and CEO)

Correct.

Luke Bohannon (Private Investor)

I'm supposing, based on your understanding of the engineering and the physics of your APU architecture, that you project that is feasible. Is that the case? And can you project even further, in terms of reducing to even more dense architectures, whether there's a lower limit?

Lee-Lean Shu (Chairman, President, and CEO)

Yeah, we picked 5 nm because at this moment the state-of-the-art processors are at either 5 or 4 nm. We wanted an apples-to-apples comparison, so we picked 5 nm as the study's base. Of course, when we actually implement the chip, I think we will want to do it with even more advanced technology, just the same as everybody else.

Luke Bohannon (Private Investor)

Okay. So the tentative plan is to make the leap, basically, from your current node, which I think you said is 16 nm.

Lee-Lean Shu (Chairman, President, and CEO)

Yes.

Luke Bohannon (Private Investor)

With Gemini-II, all the way to 5 nm for Gemini-III.

Lee-Lean Shu (Chairman, President, and CEO)

Yes.

Luke Bohannon (Private Investor)

Excellent.

Lee-Lean Shu (Chairman, President, and CEO)

No, no, I'm sorry. Gemini-III is to be determined. We picked 5 nm just because everybody else is on 5 nm, so it's a fair comparison.

Douglas Schirle (CFO)

Right. That 5 nm was picked just for comparison in the study because, as Lee-Lean just said, that's what the GPUs are on. We wanted to do a straight comparison on technology. That does not mean Gemini-III would be on that technology; it could be something more aggressive.

Luke Bohannon (Private Investor)

Okay. Yeah, not a limit point.

Didier Lasserre (VP of Sales)

Correct.

Luke Bohannon (Private Investor)

Excellent. And in terms of your larger memory cache and all the other advantages of flexibility in the memory that I read about in the white paper, how does that apply when comparing the APU to GPUs for machine vision? Both real-world vision, talking about EVs and autonomous vehicles, referencing the Tesla earnings call where they said they're buying as many NVIDIA GPUs as they can get their hands on, and your earlier references to being able to apply the APU to that market, as well as more abstract machine vision, like drug discovery and genetic medicine. Are you still seeing similar advantages?

Didier Lasserre (VP of Sales)

Yeah. I mean, the advantages, yes. The answer is, our Gemini-I, we understood, was not a fit for what you talked about, ADAS. Gemini-II, we anticipate to be a better fit, just because of the lack of an FPGA on the board with Gemini-II. The fundamental unique architecture is going to be the same, which is the fact that we're doing the computation, or the search, on the memory bit lines in place. We're not going off chip to fetch the data and then going back and rewriting the data. That fundamental unique architecture that we have is there regardless of the market, and is available with both Gemini-I and Gemini-II.

Luke Bohannon (Private Investor)

Awesome. I just wanted to get that clarification about the broader machine vision and visual processing markets, since we talked about GPUs being apt for visual processing. That's great. I think I have one more question. I definitely applaud you all for moving forward with the SaaS and vector search, because there have been so many announcements recently about the value of large vector search, NLP, and neural networks broadly, and it's good to see how much of that TAM you all can address. It's definitely good to hear that you're putting some more traction through that pathway.

One just kind of funny curiosity. I've noticed the name Gemini associated with accelerated computing, most recently and most prominently with Google. The name has always made sense to me in terms of parallel processing and its historical reference. SpinQ and Google have now also adopted Gemini, and I'm wondering if that is at all an encroachment on your intellectual property, your trademark, or if you find it to be a kind of humorous affirmation, since you're the first Gemini.

Douglas Schirle (CFO)

No, we definitely looked into it, and the issue we have is that our trademark is for a hardware device, a semiconductor device, while Google's is software related, so there's no overlap.

Luke Bohannon (Private Investor)

Okay, that makes sense. So, has anything shifted? I'm not sure if you've actually crunched the numbers, but in terms of your TAM and SAM and these new focuses on large language models, how do you see your concrete addressable market projections being updated at this point, in terms of timeline and size?

Didier Lasserre (VP of Sales)

Yeah, so we're still working on those TAMs. There are different segments, right? You have retrieval, and you have generative, and those are two different areas. We can certainly address retrieval now with Gemini-I and Gemini-II, and we certainly feel the generative side is going to be more with Gemini-III. But yeah, we're working on those TAMs and SAMs now. They're just not available yet.

Luke Bohannon (Private Investor)

Yep. Yeah, I know it's a hard thing to value, which is reflected all over the analyst side of things. I think that's all I've got. Thank you.

Didier Lasserre (VP of Sales)

Thank you, Luke.

Operator (participant)

Our next question comes from Jeffrey Bernstein, TD Cowen. Please, sir, go ahead.

Jeffrey Bernstein (Director)

Yeah. Hi, guys. A couple of questions for you. One, just on that last answer, you were talking about Gemini-I and Gemini-II addressing retrieval. Do you mean queries there? And when you say addressing generative, are you talking about training? Just clarify that a little bit.

Didier Lasserre (VP of Sales)

The response, right? You're retrieving the data, and that's something we do very well now, but it's really about generating the response. That requires very high memory bandwidth, which we have, and a very large memory cache in general. That's why we talked about pairing up with HBM3 for that, and that's more on the generative side.

Jeffrey Bernstein (Director)

Okay, so, so training at Gemini-III?

Lee-Lean Shu (Chairman, President, and CEO)

No, no, no, no, no, no. No. Okay. Inference.

Didier Lasserre (VP of Sales)

It's still inference. Yeah, it's not training.

Jeffrey Bernstein (Director)

Oh, okay. Still inference. Okay. Then, since you were talking about the potential for a 5 nm, or more aggressive, Gemini-III, what is the current tape-out cost? I know that you're not a processor, you're more like a memory, so it might be less expensive, but what do you think a 5 nm tape-out would cost now?

Lee-Lean Shu (Chairman, President, and CEO)

Well, at 5 nm, the mask set itself costs about $15 million. To do a design at 5 nm, we would probably need $100 million for the design. What we are doing right now is looking for a partner. We are not planning to do it ourselves.

Jeffrey Bernstein (Director)

Yeah.

Lee-Lean Shu (Chairman, President, and CEO)

The partnership.

Jeffrey Bernstein (Director)

Okay.

Lee-Lean Shu (Chairman, President, and CEO)

Yeah.

Jeffrey Bernstein (Director)

I just wanted to talk about the capital situation. You've, you've now got a registration statement in place. Unfortunately, you missed the big run up in the stock. Why wouldn't you preferentially sell and lease back the headquarters for funds, and then have some more tangible progress to show before we started talking about raising equity?

Didier Lasserre (VP of Sales)

Well, we have looked into the sale of the building, and we haven't decided to do that yet, but it still is an option. Property values are significantly higher than when we purchased the building many years ago. It is an opportunity...

Jeffrey Bernstein (Director)

Yeah

Didier Lasserre (VP of Sales)

... that we have considered, and we've discussed it with the board, but no decision as of yet has been made to sell the building.

Jeffrey Bernstein (Director)

Gotcha. Okay. Then, just on the Nokia business. If I remember correctly, you guys are in the, at this point, pretty old Nokia 7750 and 7950 routers. I don't even see any reference anymore to the 50. What's going on there? How much lead time would you get if they were end-of-lifing that? Would there be some kind of lucrative end-of-life revenue that you might get out of that, et cetera? Just give us a little feeling for your understanding of where you are with the Nokia business.

Didier Lasserre (VP of Sales)

Sure. Yeah, as you said, it's the 7750 and 7950 platforms, and they have extremely long life cycles, as we've been seeing. We get a 12-month rolling forecast from Nokia, and that's as far out as they go, and the 12 months still looks healthy. What they did do a while back is what's called a midlife kicker, to try and give a little bit more performance to those existing systems. What that meant for us is that it went from a 72 Mb density part to a 144 Mb density part for that midlife kicker. The ASPs are obviously higher on the larger density part.

What we saw is, even though some of the volumes have come down over time, it's been fairly flat on the revenue side, just because the increase in ASPs offset the decrease in quantity. At this point, it's still going. We still have the 12-month forecast, it looks healthy, and that's as much visibility as we get.

Jeffrey Bernstein (Director)

Gotcha. Then obviously, there's some movement, around the chip shortages and packaging shortages and that, and that kind of thing. Are we now to a more normalized rate here going forward?

Didier Lasserre (VP of Sales)

The lead times have become more normalized. The pricing, or the costs, have not. The price increases that were imposed on us, which in turn forced us to raise prices to our customers, are still there.

Jeffrey Bernstein (Director)

Yeah.

Didier Lasserre (VP of Sales)

You know, we've kept our ASPs up, and we'll keep them there until there's any kind of movement from TSMC or any of the substrate folks that raised their prices. At this point, the real change is the lead time. Lead times have come down to a more normalized area.

Jeffrey Bernstein (Director)

Gotcha. Just in terms of inventories, we should be at a more normal kind of inventory situation going forward here?

Didier Lasserre (VP of Sales)

Yeah. Yes, that's what we fully believe. Our inventories dropped in the last quarter, too, and we expect them to drop over the next couple of quarters or so.

Jeffrey Bernstein (Director)

Great. Thank you.

Operator (participant)

One moment, please, while we poll for questions. Our next question comes from George Gaspar, Private Investor. Please, sir, go ahead.

George Gaspar (Research Analyst)

Thank you. It's George Gaspar. I'd like to dig into the financing situation again. Based on your current cash position, and looking at your current development progress profile, what is your forward view on the need for financing?

Douglas Schirle (CFO)

Well, at this point, given the materials we've discussed with the board, this fiscal year we'll certainly burn some cash, maybe $12 million to $13 million, if the revenue numbers hold up. If the revenue numbers hold up next year, we could start turning the corner and actually have more cash at the end of fiscal 2025 than at the end of fiscal 2024.

George Gaspar (Research Analyst)

I see. What you're saying is that, based on the way you're moving along, your present cash position is sufficient for your targets and the development that you see over the next year?

Douglas Schirle (CFO)

Currently, that's true. That's the situation.

George Gaspar (Research Analyst)

That is. Okay. All right. Thank you. Thank you.

Operator (participant)

Thank you. There are no further questions at this time. I would now like to turn the floor back over to Mr. Shu for closing comments. Please, sir, go ahead.

Lee-Lean Shu (Chairman, President, and CEO)

Thank you all for joining us. We look forward to speaking with you again when we report our second quarter fiscal 2024 results. Thank you.

Operator (participant)

This concludes today's teleconference. You may disconnect your lines at this time. Thank you for your participation.