
GSI - Q4 2023

May 16, 2023

Transcript

Operator (participant)

Greetings, thank you for standing by. Welcome to the GSI Technology fourth quarter and fiscal 2023 results conference call. At this time, all participants are in a listen-only mode. Later, we will conduct a question-and-answer session. At that time, we will provide instructions for those interested in entering the queue for the Q&A. Before we begin today's call, the company has requested that I read the following safe harbor statement. The matters discussed in this conference call may include forward-looking statements regarding future events and the future performance of GSI Technology that involve risks and uncertainties that could cause actual results to differ materially from those anticipated. These risks and uncertainties are described in the company's Form 10-K filed with the Securities and Exchange Commission.

Additionally, I have been asked to advise that this conference call is being recorded today, May sixteenth, 2023, at the request of GSI Technology. Hosting the call today is Lee-Lean Shu, the company's chairman, president, and chief executive officer. With him are Douglas Schirle, chief financial officer, and Didier Lasserre, vice president of sales. I would now like to turn the conference over to Mr. Shu. Please go ahead, sir.

Lee-Lean Shu (Chairman, President, and CEO)

Good day, everyone, and welcome to our fiscal fourth quarter and full year 2023 financial results earnings call. The 2023 fiscal year was filled with many positive developments, new partnerships, and progress toward achieving our goals. We also experienced setbacks and unforeseen delays on several fronts with the APU. We learned a lot during the year about the addressable market Gemini-I can reasonably pursue with our team, given our limited resources. However, we have recently made significant strides in leveraging third-party resources to help identify users, resellers, and OEMs. These resources are proving valuable in helping us identify opportunities for capturing revenue and increasing awareness of the APU's tremendous capabilities. We have also sharpened our focus for Gemini-I to leverage our resources and prioritize near-term opportunities, such as synthetic aperture radar, or SAR, and satellites, where we have a superior solution.

We understand these markets and know whom we can support and help with our offering. Another focus application for Gemini-I is vector search engines, where our APU plugin has demonstrated enhanced performance. To this end, we have dedicated more resources and prioritized the target customers that have expressed interest in leveraging our solution. Our data science team has been busy working on the SaaS search project with one leading provider, and we plan to pivot to other players in the space once we have met our deliverables with the first partner. Looking ahead on our roadmap, we will build upon the work we are doing today in future APU versions to address large language models, or LLMs, for natural language processing. Vector search engines are a fundamental part of ChatGPT's architecture and essentially function as the memory for ChatGPT.

Large language models use deep neural networks, such as transformers, to learn from billions or trillions of words and produce text. This is another reason that vector search is an appropriate focus application for the APU. Additionally, we are improving our search and AI SaaS platform to support our go-to-market strategy for search. We intend to use this tool to develop more potential partnerships, like the OpenAI plugin integration that we recently launched, and with other open-source, decentralized search engines that use machine learning algorithms and vector search engines. The increasing size and complexity of enterprise datasets and the proliferation of AI in all aspects of business are driving rapid growth in these search engines. Encouraged by the positive reception of our APU plugin by several key players, we are optimistic about generating modest revenue from this market in fiscal year 2024.
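The core vector search operation described above, finding the stored embeddings closest to a query vector, can be sketched in a few lines of NumPy. This is a generic illustration of the technique, not GSI's APU plugin or its API; the function name and toy embeddings are invented for the example:

```python
import numpy as np

def cosine_top_k(query_vec, doc_matrix, k=3):
    """Return indices of the k rows of doc_matrix most similar to query_vec."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    sims = d @ q                       # cosine similarity per stored document
    return np.argsort(-sims)[:k]       # highest similarity first

# Toy 4-dimensional "embeddings" standing in for a real model's output.
docs = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
])
query = np.array([1.0, 0.05, 0.0, 0.0])
print(cosine_top_k(query, docs, k=2))  # indices of the two nearest documents
```

In a retrieval-augmented system, the returned documents would be passed to the language model as context, which is the sense in which vector search acts as the model's "memory."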

For both of the Gemini-I focus applications I have just mentioned, SAR and Fast Vector Search, we have set specific revenue goals that we aim to achieve this fiscal year. Our Python compiler stack has progressed in the past quarter. It is designed to offer Python's development advantages while delivering C's high performance, without compromising either. Almost all current focus applications do not require a compiler. We have a beta version in use currently and are on track to release a production-ready version later this year. This compiler will demystify the APU for any Python or C developer. I'm excited to announce that we are on track to complete the tape-out for Gemini-II by this summer and to evaluate the first silicon by the end of calendar year 2023.

We aim to bring this solution to market in the second half of 2024. Gemini-II's design will provide significant performance enhancements, which will reduce power consumption and latency. These features will expand the future addressable market for the APU to larger markets such as edge applications, Fast Vector Search, LLMs, and advanced driver assistance systems, or ADAS, the last being a vertical we would go after with a strategic partner rather than directly. Gemini-II is built with a TSMC 16nm process. The chip contains 6 megabytes of associative memory connected to 100 megabytes of distributed SRAM, with 45 terabytes per second of bandwidth, or 15 times the memory bandwidth of the state-of-the-art parallel processor for AI. This is more than four times the processing power and eight times the memory density compared to Gemini-I.

The Gemini APU is built with bit processing, which allows fully flexible data formats for applications, an inherent advantage versus other parallel processors. Gemini-II is a complete package that includes a DDR4 controller and external interfaces for PCIe Gen4 x16 and PCIe Gen4 x4. This integrated solution allows Gemini-II to be used in affordable edge applications while still providing significant processing capabilities. In simpler terms, Gemini-II combines different components together, allowing it to be used in less expensive devices while still being powerful enough to handle demanding tasks at the edge of a network. Put another way, Gemini-II brings data center capability to the edge. This means that computationally intense applications can be run locally, for example in ADAS, delivery drones, autonomous robots, UAVs or unmanned aerial vehicles, and satellites.

Another application for Gemini-II could be IoT edge applications, like critical infrastructure or processes requiring reliable and efficient operation, for example wind farms, to mitigate failure modes that can lead to significant financial losses or operational disruptions. Gemini-II's combination of high processing power, large built-in memory with tremendous bandwidth, and low cost provides a best-in-class solution for AI applications like Fast Vector Search, a growing market driven by the proliferation of big data and the need for fast and accurate processing. Recently, we were granted a new patent for Gemini-II's in-memory full adder, a basic building block behind Gemini-II's high processing power. We are thrilled to announce that we are currently in very early-stage discussions with a top cloud service provider to explore how Gemini-II's foundational architecture could deliver performance advantages.

Just this year, we have seen the disruptive impact of large language models that understand and generate human-like language, like ChatGPT, Microsoft Bing, and Google's Bard. As the boundaries of natural language processing continue to be pushed, we envision abundant opportunities in this market for Gemini-II and future versions of the APU. We believe that we have merely scratched the surface of the potential of large language models and the transformative impact they can have across numerous fields. Large language models' attention memory requires very large built-in memory and very large on-chip memory bandwidth. The state-of-the-art GPU solution has built-in 3D memory to address the high-capacity memory requirement but lacks the memory bandwidth for adequate memory access. This limitation is going to get worse as large language models progress. The Gemini chip architecture has inherently large memory bandwidth.

It is a natural migration to add 3D memory to the next-generation Gemini chip to address the large memory requirement. This substantial improvement potentially translates into orders of magnitude better performance. As a result, we could be strongly positioned to compete effectively in the rapidly expanding AI market, staying ahead of the industry-leading competitors. Our resources and team are focused on applications where we have a high probability of generating revenue to capitalize on Gemini-I's capabilities. As we bring Gemini-II to market, we will be more experienced in approaching target customers and creating new revenue streams. We are formulating our roadmap for the APU, which holds tremendous potential. With future versions, the APU has the capability to cater to much larger markets, and the potential opportunities are quite promising.

In parallel, with our board of directors, we are actively exploring various options to create shareholder value. I remain fully committed to driving sustained growth and innovation in the year ahead. Thank you for your support and for joining us today. We look forward to updating you on our progress in the coming quarters. I will now hand the call over to Didier, who will discuss our business performance further. Please go ahead, Didier.

Didier Lasserre (VP of Sales)

Thank you, Lee-Lean. As Lee-Lean stated, we have sharpened our focus on a few near-term APU revenue opportunities. In addition, we have strengthened our team with a top data science contractor whose primary job is to accelerate the development of our plug-in solution for the high-performance search engine platforms that Lee-Lean mentioned. We have also begun working with a company that offers custom embedded AI solutions for high-speed computing using Gemini-I and Gemini-II. Another critical development to improve our market access for the APU has been adding distributors. We are pleased to announce that we have added a new distributor for the European market, covering not only our radiation-hard and radiation-tolerant SRAMs but also our hardened APU. In addition to our partnerships and focus on near-term opportunities, we plan to build a platform to enable us to pursue licensing opportunities.

This is in the very early stages, and we have work to do before we formally approach potential strategic partners. That said, we have had a few preliminary conversations on determining what is required to integrate Gemini technology into another platform. This would allow us to identify the specific performance benefits for partners' applications, to ensure effective communication of the problem we solve in their system or solution. We recently demoed Gemini-I for a private company specializing in SAR satellite technology. They provide high-resolution Earth observation imagery to government and commercial customers for disaster response, infrastructure monitoring, and national security applications. The satellites are designed to provide flexible, on-demand imaging capabilities that customers can access worldwide.

They recently provided the datasets to conduct comparison benchmarks on Gemini-I, and we are commencing the process of running those benchmarks. SAR is one market where we anticipate that we can generate modest revenue with Gemini-I this fiscal year. GSI was recently awarded a Phase 1 Small Business Innovation Research, or SBIR, contract. SBIR is a U.S. government program that supports small business R&D projects that could be commercialized for specific government needs. For this contract, we will collaborate with the Air and Space Force to address the problem of edge computing in space with Gemini-I. Gemini-I is already radiation-tolerant, making it particularly well suited for Space Force missions. This contract is a milestone for GSI Technology, as it will showcase the APU's capabilities for the military and other government agencies and provide great references for similar applications.

We have submitted other proposals for a Direct-to-Phase 2 project, and more SBIR proposals are in the pipeline. On that note, we received verbal confirmation just this morning that we have been awarded a research and development contract, which could be worth up to $1.25 million, to integrate GSI's next-generation Gemini-II for Air and Space Force mission applications. This revenue will be recognized as milestones are achieved; a typical timeframe is 18 months to 2 years. Once the agreement has been finalized and executed, we will issue a press release with full details. Let me switch now to the customer and product breakdown for the fourth quarter.

In the fourth quarter of fiscal 2023, sales to Nokia were $1.2 million or 21.8% of net revenues, compared to $2 million or 23.1% of net revenues in the same period a year ago, and $1.3 million or 20% of net revenues in the prior quarter. Military defense sales were 44.2% of fourth quarter shipments, compared to 22.3% of shipments in the comparable period a year ago, and 26.2% of shipments in the prior quarter. SigmaQuad sales were 46.3% of fourth quarter shipments, compared to 47.6% in the fourth quarter of fiscal 2022, and 45.2% in the prior quarter. I'd now like to hand the call over to Doug. Go ahead, Doug.

Douglas Schirle (CFO)

Thank you, Didier. I will start with the fourth quarter results summary, followed by a review of the full year fiscal 2023 results. GSI reported a net loss of $4 million, or $0.16 per diluted share, on net revenues of $5.4 million for the fourth quarter of fiscal 2023, compared to a net loss of $3 million, or $0.12 per diluted share, on net revenues of $8.7 million for the fourth quarter of fiscal 2022, and a net loss of $4.8 million, or $0.20 per diluted share, on net revenues of $6.4 million for the third quarter of fiscal 2023.

Gross margin was 55.9% in the fourth quarter of fiscal 2023, compared to 58.6% in the prior year period and 57.5% in the preceding third quarter. The decrease in gross margin in the fourth quarter of fiscal 2023 was primarily due to the effect of lower revenue on the fixed costs in our cost of goods sold. Total operating expenses in the fourth quarter of fiscal 2023 were $6.9 million, compared to $8.1 million in the fourth quarter of fiscal 2022 and $8.5 million in the prior quarter. Research and development expenses were $5 million, compared to $6.5 million in the prior year period and $5.5 million in the prior quarter.

Selling, general and administrative expenses were $1.9 million in the quarter ended March 31st, 2023, compared to $1.5 million in the prior year quarter and $3 million in the previous quarter. Fourth quarter fiscal 2023 operating loss was $3.9 million, compared to an operating loss of $2.9 million in the prior year period and an operating loss of $4.8 million in the prior quarter. Fourth quarter fiscal 2023 net loss included interest and other income of $101,000 and a tax provision of $191,000, compared to $47,000 in interest and other expense and a tax provision of $21,000 for the same period a year ago.

In the preceding third quarter, net loss included interest and other income of $61,000 and a tax provision of $84,000. Total fourth quarter pre-tax stock-based compensation expense was $515,000, compared to $714,000 in the comparable quarter a year ago, and $654,000 in the prior quarter. For the fiscal year ended March 31, 2023, the company reported net loss of $16 million or $0.65 per diluted share on net revenues of $29.7 million, compared to a net loss of $16.4 million or $0.67 per diluted share on net revenues of $33.4 million in the fiscal year ended March 31, 2022. Gross margin for fiscal 2023 was 59.6% compared to 55.5% in the prior year.

The increase in gross margin was primarily due to product mix. Total operating expenses were $33.5 million in fiscal 2023, compared to $34.9 million in fiscal 2022. Research and development expenses were $23.6 million, compared to $24.7 million in the prior fiscal year. Selling general and administrative expenses were $9.9 million, compared to $10.2 million in fiscal 2022. The decline in research and development expenses was primarily due to the cost reduction measures announced by the company in November 2022. The operating loss for fiscal 2023 was $15.8 million, compared to an operating loss of $16.4 million from the prior year.

The fiscal 2023 net loss included interest and other income of $202,000 and a tax provision of $372,000, compared to $60,000 in interest and other expense and a tax benefit of $45,000 a year ago. On March 31, 2023, the company had $30.6 million in cash equivalents and short-term investments, with no long-term investments, compared to $44 million in cash equivalents and short-term investments, with $3.3 million in long-term investments, at March 31, 2022. Working capital was $34.7 million as of March 31, 2023, versus $45.8 million at March 31, 2022, with no debt.

Stockholders' equity as of March 31, 2023 was $51.4 million, compared to $64.5 million as of the fiscal year ended March 31, 2022. Operator, at this point, we will open the call to Q&A.

Operator (participant)

Thank you. If you would like to register for a question, please press the one followed by the four on your telephone keypad now. You will hear a three-tone prompt to acknowledge your request. If your question has been answered and you would like to withdraw your registration, please press the one followed by the three. One moment, please, for the first question. The first question comes from the line of Raji Gill with Needham. Please proceed with your question.

Nick Doyle (Equity Research Analyst)

Hi, this is Nick Doyle on for Rajvindra Gill. Two questions on Gemini-II. Are all the costs related to the tape-out and then the test and volume production contemplated in your current outlook? Could you expand on what kind of applications you're seeing traction in with that Gemini-II, specifically anything in ADAS and then using the large language models? Thanks.

Douglas Schirle (CFO)

Yeah, in terms of R&D spending, most of what we're spending today is on Gemini-II. We have the hardware team here in Sunnyvale and a software team in Israel. There will be a tape-out in the first half of fiscal 2024 for Gemini-II; it'll probably run about two and a half million dollars. Other than that, the R&D expenses should be similar to what we've seen in the most recent quarter.

Didier Lasserre (VP of Sales)

Regarding the applications, you cut out. Were you asking about Gemini-I or Gemini-II?

Nick Doyle (Equity Research Analyst)

Gemini-II, please.

Didier Lasserre (VP of Sales)

Yeah, Gemini-II. ADAS, as we discussed in the conversation before, is something we want to address, but we most likely will use a partner to do that. As far as the large language models, as we discussed, we certainly feel that the advantages in the Gemini technology will be applicable there. Whether it starts with Gemini-II or is also customized with Gemini-III is to be determined.

Nick Doyle (Equity Research Analyst)

Okay, that makes sense. Just a quick one: is there a timeline for the rad-hard roadmap for the product you mentioned in the EU?

Didier Lasserre (VP of Sales)

The rad-hard and rad-tolerant SRAMs are available today. We have done some testing; it's been at least a year and a half. We did the testing on the APU, Gemini-I specifically. It came back very favorable, but the beam was a little bit off that day, so the tests we could do were limited. We are actually going to do the full complement of radiation testing in the second half of this year, so we have all the data required by the folks that will be sending it into space. Officially, the APU will be rad-tolerant sometime by the end of this year.

Nick Doyle (Equity Research Analyst)

Makes sense. Thank you.

Operator (participant)

The next question comes from the line of Krish Sankar with TD Cowen. Please proceed with your question.

Krish Sankar (Managing Director)

Hi guys, just a couple of questions for me. I just want to make sure I heard right: you brought on a consultant that's helping target applications for Gemini-I, is that right?

Didier Lasserre (VP of Sales)

They're specifically helping us write the interfaces for some of the Fast Vector Search platforms that are out there.

Krish Sankar (Managing Director)

Gotcha. Then you said there's a custom embedded AI solutions supplier that's now going to integrate Gemini-I into some high-performance compute solutions for clients. Am I getting that right?

Didier Lasserre (VP of Sales)

Partially. It's not limited to Gemini-I; it's Gemini-I and Gemini-II. They have a multitude of different potential applications, ranging from SAR to satellite applications to marine search and rescue. There are a lot of different applications that they're looking at it for. In some of the cases, they'll be able to use essentially our Leda boards, but in many cases they will be developing their own ultra-small boards for applications where our boards are considered a little too big. It's a multitude of different applications, and it will be for both Gemini-I and Gemini-II.

Krish Sankar (Managing Director)

Gotcha. Okay. Then as far as the large language model applications, I think there are potentially two, correct me if I'm wrong. One is just to run queries, as opposed to doing training: running queries of these large matrices quickly and at low power. The other has to do with making training more efficient, by not having to redo matrices over and over again as you do new learning. Is that right? Which are we talking about here today?

Lee-Lean Shu (Chairman, President, and CEO)

Yes. Our primary target will be the search, which is the inference part.

We are not on the training path. But if you can do search efficiently, you can help the training. We can do zero-shot learning or one-shot learning, which means you don't need to retrain on the data set. If a first query comes in that we don't recognize, we can store it in our memory chip. The second time a similar item comes in, we can recognize it right away. That's very different from traditional training, where you have to run the whole model over the whole data set all over again, which is very, very time-consuming. If you have the capability to do zero-shot learning, you can reduce the training process tremendously.
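The zero-shot and one-shot flow described here, store an unrecognized item's embedding once and match future similar items by similarity search instead of retraining, can be sketched as follows. This is a toy illustration of the idea, not GSI's implementation; the class name, threshold, and vectors are invented for the example:

```python
import numpy as np

class OneShotMemory:
    """Toy associative memory: recognize items by nearest stored vector."""
    def __init__(self, threshold=0.9):
        self.vectors, self.labels = [], []
        self.threshold = threshold  # minimum cosine similarity to "recognize"

    def query(self, vec, label_if_new=None):
        v = vec / np.linalg.norm(vec)
        for stored, label in zip(self.vectors, self.labels):
            if float(stored @ v) >= self.threshold:
                return label                 # recognized: no retraining needed
        # Unrecognized: store it once ("one-shot"); the next similar query
        # will match by similarity, with no pass over the full training set.
        self.vectors.append(v)
        self.labels.append(label_if_new)
        return None

mem = OneShotMemory()
first = np.array([1.0, 0.0, 0.0])
print(mem.query(first, label_if_new="cat"))  # None: unseen, now stored
similar = np.array([0.98, 0.02, 0.0])
print(mem.query(similar))                    # "cat": recognized on sight
```

The contrast with conventional training is that recognizing a new item here costs one similarity lookup and one store, rather than a full optimization pass over the data set.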

Krish Sankar (Managing Director)

That's great. Thank you.

Lee-Lean Shu (Chairman, President, and CEO)

Yeah.

Operator (participant)

The next question comes from the line of Orin Hirschman with AIGH Investment Partners. Please proceed with your question.

Orin Hirschman (Managing Member and CEO)

Hi, how are you?

Didier Lasserre (VP of Sales)

Good.

Orin Hirschman (Managing Member and CEO)

One of the things that the Gemini in-memory processing architecture is very good at, which really wasn't of tremendous interest when you first introduced Gemini, is this natural language processing. All of a sudden, the whole world has changed, and you've got things like ChatGPT and other similar types of NLP situations where it fits exactly into what you do best. It sounded, from one of the prior comments in the last question, like you're actually having code and drivers written to be able to optimize the use of Gemini-I, and certainly Gemini-II, for this application. I would think that one of the simple applications where you could sell a lot of boards is simply acceleration, where everybody is having difficulty using GPUs, because NLP is not where a GPU shines on the AI side, in order to accelerate something like ChatGPT.

Lee-Lean Shu (Chairman, President, and CEO)

What's the question again?

Orin Hirschman (Managing Member and CEO)

Yeah. The question is: is that, in fact, a priority in terms of what you're working on, to be able to introduce your own acceleration boards to do it, or to do it with partners? Is it, in fact, a great application? It certainly sounds, so far in the call, like it's a great application for the Gemini APU.

Lee-Lean Shu (Chairman, President, and CEO)

Okay, I think I discussed this in my statement. The challenges for the large language model are twofold. First, you need a very large memory. Second, you need very high memory bandwidth. Those are two very difficult things to achieve, and I think today nobody in the market has a good solution. As I mentioned, we are having a very exciting discussion with a large cloud service provider to see how our Gemini foundational architecture can help move this forward. We already have very, very good memory bandwidth.

As I mentioned in my statement, we have 15 times the memory bandwidth of today's state-of-the-art GPU, or parallel processor. That's all inherent in the architecture. If we can add high memory capacity to that, then this is really a solution nobody in the market can provide. We are very excited; we are trying to explore the advantage we have and see where we can go from here.

Orin Hirschman (Managing Member and CEO)

Do you have any idea when the coding for the interface will be ready, to be able to demo the type of acceleration gains we're talking about with something like ChatGPT, so customers can actually see some type of benchmarking, even with Gemini-I, and maybe a simulation until Gemini-II is ready? When will that code be ready? I know that's what you mentioned you're working on.

Lee-Lean Shu (Chairman, President, and CEO)

Yes. OpenAI has a plugin; basically, you can plug your software into the main machine and utilize the existing model through the plugin. Right now we are working on it, with Gemini-I definitely, and Gemini-II to follow on. You can extrapolate from how well those are working to the future.

Orin Hirschman (Managing Member and CEO)

Any ideas when we might see some benchmarks in coming months?

Lee-Lean Shu (Chairman, President, and CEO)

I would say maybe in a quarter or two we'll have something to tell you.

Orin Hirschman (Managing Member and CEO)

Okay. Just a related question, but even more futuristic. There's talk of doing something similar, and there are a number of projects; in fact, you even had an early project with MUVE, to be able to show off what you can do in terms of visuals as well. My question is, taking that same natural language processing and doing it on a visual level is beyond belief in terms of how computationally intensive it is, but it is also well suited for what you guys do. Is anybody talking about doing anything like that? Obviously you did that early demo, which impressed a lot of people, but that's even a step beyond what people have dreamed of today. You can't do that using current architectures.

Any thoughts on that from a futuristic perspective? Will that need a Gemini-III, or can it even be done on a Gemini-II? Then one last follow-up question.

Lee-Lean Shu (Chairman, President, and CEO)

You're asking what we want to do in a future generation? Is that it?

Orin Hirschman (Managing Member and CEO)

No, specifically, the more important part to me is just in terms of incredible visual search capabilities, almost like NLP search capabilities on visuals. You did that impressive early demo with MUVE, there have been some other experimental projects, and people all over the world are starting to do experimental projects on massive amounts of visual data. Any more thoughts on that? It's obviously very suitable, or uniquely suitable, for what you do, versus just NUMEN or just GPUs for that matter. Any other interesting projects like that MUVE project? I know it's a bit futuristic, but has anybody done more in terms of that massive type of visual search, comparative visual search using NLP, using Gemini?

Lee-Lean Shu (Chairman, President, and CEO)

Yes, we have looked at that. As I just mentioned, with our partner we extend the advantages of the Gemini architecture. We looked at one workload: if we have enough memory, we would be 10x faster than any solution that exists today. That's why we say we have this inherent advantage. The thing is that we don't have enough built-in memory for that today. If, in the future roadmap, we can put enough memory into it, that's where you are looking at order-of-magnitude better performance than existing solutions.

Orin Hirschman (Managing Member and CEO)

On that note, a closing question, just in terms of what nanometer geometry is being used for Gemini-I and Gemini-II, and what you're thinking for Gemini-III. Obviously, that will affect what you just discussed in terms of the ability to pack in memory, etc. If you can tell us more about that, then just one follow-up, and that's it for me. Thank you so much.

Lee-Lean Shu (Chairman, President, and CEO)

Yeah. Today, Gemini-I is 28 nm and Gemini-II is 16 nm. Today's state-of-the-art GPU is 4 nm. If we look at the future, we would do 5 nm and build 3D memory into it, because the only way you can get high-capacity memory with reasonable bandwidth is 3D memory. If we put in 3D memory with 5 nm, we will be an order of magnitude better.

Orin Hirschman (Managing Member and CEO)

This is the follow-up question. Understanding that about Gemini-III, but knowing that Gemini-II is going to be the key platform coming up shortly: in terms of your ability to accelerate NLP, again, not visual, so I'll forget that futuristic question, here today, in terms of accelerating NLP applications like ChatGPT, does Gemini-II have enough in it that you're competitive with, or even superior to, a leading-edge optimized GPU on that type of application?

Lee-Lean Shu (Chairman, President, and CEO)

Yes.

Orin Hirschman (Managing Member and CEO)

Like a Hopper-style GPU? Have you passed that with Gemini-II? The only question is, can you leapfrog it even further? That's my last question. Thank you so much.

Lee-Lean Shu (Chairman, President, and CEO)

Just as I mentioned, there are two things: big memory capacity and big memory bandwidth, and we have one of them. If a workload can fit into our chip, we will be the best solution out there, and there are many, many cases like that. Even with ChatGPT, it doesn't have to be a humongous data set; it can be a smaller data set, and if the data set can fit into our chip, we will be number one in the market.

Orin Hirschman (Managing Member and CEO)

Okay, great. Thank you so much.

Operator (participant)

As a reminder to register for a question, press the one followed by the four on your telephone keypad. The next question is from the line of Luke Boyne, private investor. Please proceed with your question.

Speaker 7

Hi. Good to be back. Hope you all are well. It's a very exciting announcement and development, and great to hear the comprehensive layout there. Just some minor clarifications, and then going a little bit broader with the near-term potential. Wondering if your Amazon Web Services server offering is capable of fielding, say, a broader range of companies and potential end-use cases that could more or less play around with your service without having to go through the more complex processes of embedding or other integration?

Just plug and play and see what you can do for their applications, especially thinking about vector search, but also rich data like was mentioned, maybe for metaverse, maybe for dense registration, things like that. Wondering how you're seeing the potential to expand the Amazon Web Services offering, or a similar offering on, say, Azure or other clouds, and especially how that would relate to an earlier rollout of Gemini-II from your own facility, your own servers, on those clouds.

Didier Lasserre (VP of Sales)

As we've discussed in the past, we've started the integration with OpenSearch. That's ongoing. We have already set up our own servers for that. We have some here in our Sunnyvale facility, some in our Israeli facility, and then we also have some at an offsite facility that's directly across the street from AWS West and, you know, directly connected. We have that in place with Gemini-I, and over time, obviously, we would migrate those to Gemini-II. Those are in place, and we do have some SAR demos that people can run off of those remotely. It's not set up yet to be able to, you know, load your own data.

It's the data sets that are already in there which you can run. We're not at the point yet where you can, you know, enter your own data, at least not larger data sets. That is certainly the direction we're going. We're just not quite there yet.

Speaker 7

Do you have a timeline on when you would be able to roll out those interactive features and capacities?

Didier Lasserre (VP of Sales)

We're shooting for this year. For some of the examples you brought up, you know, we're gonna get some help from the data science contractor that we have on board now. It's something we're trying to roll out in the second half of this year.

Speaker 7

Excellent. All right. That's all I have. Pipeline's loaded. Yeah, appreciate y'all.

Didier Lasserre (VP of Sales)

Thanks, Luke.

Operator (participant)

The next question is a follow-up from the line of Krish Sankar with TD Cowen.

Krish Sankar (Managing Director)

Hi. Yeah. Just wanted to see if you could give us an update on the ELTA SAR application and what's going on there?

Didier Lasserre (VP of Sales)

As you recall, we did the POC with them. It was a very broad POC. It could be used for different vehicles or vessels, and it could be used at a multitude of heights, from 100 m to, you know, much, much higher, obviously into space. The initial program they were looking at for us was just a single laptop, I guess you could call it that. You know, they had already been using a GPU, and they're still using the GPU for that program. There's a follow-on program that they're looking at now, and we're going through that process with them.

It won't be another POC, because we've already done one, but it'll be a bit of a different project than what we were working on with them before. It'll still be under SAR, and it'll still be the same algorithm, so it should be a simple integration.

Krish Sankar (Managing Director)

Okay. Just wondering: we've been waiting to hopefully get some space provenance on the rad-hard SRAM, and wondering if you guys have any visibility now on when that launch might happen, or is it permanently scrubbed?

Didier Lasserre (VP of Sales)

No, it's not permanently scrubbed. We follow up. Yeah, I get your frustration, because I'm with you on this one. So it's not scrubbed. There were multiple programs, and when I say they, there were a few defense contractors using it, as we've talked about. A couple of those programs have been scrubbed, but the larger ones we're looking at have not been scrubbed. They're certainly still active. They've just been pushing out the launch dates, and we're just not getting a good feel for exactly when the next launch is gonna be. We know they were delayed because they couldn't get some critical components, and now it's just a matter of, you know, getting them to actually do it.

The answer is, we're still optimistic about it. It's just that the timing of when it's actually gonna happen is elusive for us.

Krish Sankar (Managing Director)

Can the European distributor kinda do anything on the rad-hard piece, or are they stuck with just doing radiation-tolerant parts until you get that space provenance, or is it a different approach in Europe?

Didier Lasserre (VP of Sales)

Oh, no, they're definitely gonna be going after everything now. For the folks that we've already sent parts to, where we're looking to get heritage, it's really just a heritage part. The heritage is a signal to the world that says your parts have been launched into space and they work. It's really an additional check mark in a box for a lot of these folks, but it doesn't change the fact that our parts are already internally qualified to work up there. We know they will work based off of the testing that we have done. This European distributor is gonna be finding additional opportunities for us.

I mean, the folks that we were looking to do the heritage with for the short-term launches were U.S.-based companies. We have shipped some rad-tolerant parts and at least one rad-hard part to a European customer, but they were not the ones we anticipated would get us the initial heritage.

Krish Sankar (Managing Director)

Okay. All right, great. Any update on some of the scientific applications? Has, you know, the Weizmann Institute come back for more boards, or any analogous-type customers in, you know, pharma, med tech, biotech, universities, et cetera?

Didier Lasserre (VP of Sales)

Universities, yes. We're candidly not spending a lot of time on that market; the revenue opportunities in the other markets we've discussed today are larger. We do have two universities that, well, let me think. Yeah, two. There are two different applications at two different universities that are looking at the boards for genomics. They'll essentially be doing the algorithms and doing the write-up. Personally, we are not spending much effort ourselves. We've already done a plug-in specifically for the BIOVIA Tanimoto, and it just doesn't make sense for us, based on our limited resources, to spend more time developing more algorithms for more platforms. The revenue volumes there just aren't as great as they are in the other markets we're addressing.

Krish Sankar (Managing Director)

Makes sense. Thanks.

Operator (participant)

There are no further questions at this time. I will now turn the presentation back to the hosts.

Lee-Lean Shu (Chairman, President, and CEO)

Thank you all for joining us. We look forward to speaking with you again when we report our first quarter fiscal 2024 results. Thank you.

Operator (participant)

That does conclude today's conference. We thank you for your participation and ask that you please disconnect your line.