Thursday, April 13, 2017

China fab toolmakers targeting advanced-node production

Naura Technology (formerly Beijing Sevenstar Electronics) has started shipping ion etch equipment for the manufacture of 14nm chips to chipmakers, while Advanced Micro-Fabrication Equipment (AMEC) is engaged in the development of etch tools for the production of 5nm chips, according to industry sources.
Naura has also secured continued orders from Semiconductor Manufacturing International (SMIC), the largest China-based pure-play foundry, said the sources. SMIC has become an important client of Naura, which has already obtained orders for advanced-node manufacturing from the foundry's 12-inch fabs, the sources indicated.
SMIC plans to enhance its 28nm process variants to meet customers' various needs, while expanding production capacity at its 12-inch facilities. With China pushing to raise its self-sufficiency rate for chipmaking equipment, Naura and other China-based fab toolmakers are seen as the major beneficiaries of SMIC's 12-inch fab expansion, the sources said.
Naura CEO Zhao Jinrong was quoted in previous reports as saying China's semiconductor equipment industry growth will be driven by the development of the country's homegrown IC industry supply chain. China's self-sufficiency rate for production of semiconductor equipment is still lower than 10%, but the proportion is expected to reach 30% within the next three years, according to Zhao.
Naura's sales generated from the semiconductor sector grew to CNY810 million (US$117 million) in 2016 from CNY520 million in 2015, an increase of about 56%. The company expects to continue enjoying impressive revenue growth in 2017, driven by new orders.
AMEC is also among SMIC's major equipment suppliers in China. AMEC has been engaged in the development of 5nm etching tools for five years, and is expected to roll out the new product line at the end of 2017, according to industry sources. The availability of AMEC's 5nm etch equipment will be a milestone for China's homegrown chipmaking equipment industry.

http://www.digitimes.com/news/a20170412PD200.html

Wednesday, April 12, 2017

Samsung Electronics to Start Operating World’s Largest Semiconductor Factory

Samsung’s three-legged cluster in Korea connecting Giheung, Hwaseong and Pyeongtaek is set to be completed. Serving as a mother fab for advanced NAND chips, Samsung Electronics’ Pyeongtaek plant will become the production base for fourth-generation, 64-layer 3D NAND chips built with the world’s most advanced technology. The world’s largest semiconductor factory will start operating in July. With competitors such as Micron and SK Hynix having yet to mass-produce fourth-generation NAND chips, Samsung plans to dominate the semiconductor market through its so-called “super-gap” strategy.
Samsung Electronics currently operates a system semiconductor plant in Giheung and a DRAM plant in Hwaseong. With the NAND flash plant in Pyeongtaek, the company has completed its long-envisioned semiconductor triangle and laid the groundwork for long-term dominance of the global semiconductor market. In addition, Samsung will expand the system semiconductor and DRAM plants in Giheung and Hwaseong, and develop the Pyeongtaek plant and the Xi’an plant in China as its production bases for NAND flash chips.
Once the Pyeongtaek plant starts operation in earnest, Samsung’s NAND flash production will increase dramatically. According to market intelligence firm IHS Markit, Samsung Electronics has NAND production capacity of 450,000 wafers a month, more than half of it devoted to 3D NAND memory chips. 3D NAND flash is a type of flash memory in which the memory cells are stacked vertically in multiple layers to increase capacity. As the Pyeongtaek plant is expected to mass-produce only 3D NAND chips, Samsung Electronics is well positioned to expand its share of the 3D NAND market, where demand is surging.
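As a back-of-the-envelope illustration of why vertical stacking raises capacity, the short sketch below compares bits per die for a single-layer (planar) design and a 64-layer stack with the same footprint; the cell count and bits-per-cell figures are assumptions for illustration, not Samsung specifications.

# Illustrative only: how stacking layers multiplies NAND capacity at a fixed die footprint.
# The cell count and bits-per-cell values are assumptions, not vendor specifications.
bits_per_cell = 3               # TLC NAND stores 3 bits per cell (assumed here)
cells_per_layer = 2_700_000_000 # assumed number of cells that fit in one layer of a given die area
layers = 64                     # fourth-generation, 64-layer 3D NAND

planar_gbit = cells_per_layer * bits_per_cell / 1e9   # capacity with a single layer
stacked_gbit = planar_gbit * layers                   # same footprint, 64 layers

print(f"planar (1 layer):  ~{planar_gbit:.0f} Gbit per die")
print(f"64-layer 3D NAND: ~{stacked_gbit:.0f} Gbit per die ({layers}x, same footprint)")
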
However, the Pyeongtaek plant’s total 3D NAND output in the second half of this year, after it starts operation, will be 70,000 to 80,000 wafers. An official from the semiconductor industry said, “In the initial stage of operation, production is limited. After one to two years, the rate of operation will hit its stride.” Accordingly, the Pyeongtaek plant is expected to account for 10 to 20 percent of Samsung Electronics’ semiconductor business for a while.
Samsung Electronics seeks to strengthen its unrivaled position in the rapidly growing NAND market by operating the Pyeongtaek plant as soon as possible. NAND is a memory chip that retains data even if power is turned off and its demand in mobile devices, including high-performance smartphones, is skyrocketing.
IHS Markit said that Samsung Electronics’ share of the NAND flash market increased from 32 percent in 2015 to 36.1 percent in 2016. An official from the semiconductor industry said, “When Samsung mass produces the fourth-generation NAND memory chips and lowers the price of the second- and third-generation NAND products, other producers will be hit hard. Samsung will be able to expand its market share to 40 percent even if other semiconductor companies join forces through the acquisition of Toshiba.”
In particular, Samsung has set up the fourth-generation NAND mass production system and started developing the fifth-generation NAND at the same time. During the annual shareholder meeting held at the end of last month, Samsung Electronics Vice Chairman Kwon Oh-hyun said, “We are planning to continuously widen the technical gap by developing an advanced process, including a fifth-generation V-NAND, in time.”
An expert in the semiconductor market said, “Concerns over oversupply and expectations for new demand caused by the fourth industrial revolution co-exist in the market. Samsung’s market-dominating power in the future will be determined by how much Samsung can differentiate its products and strategy through the Pyeongtaek plant.”

http://www.businesskorea.co.kr/english/news/ict/17805-three-legged-chip-cluster-completed-samsung-electronics-start-operating-world%E2%80%99s

Tuesday, April 11, 2017

AMD Brings In More Virtual Reality Capabilities By Acquiring Nitero's IP And Talent

AMD is pushing deeper into the virtual reality market, announcing on Monday it acquired the intellectual property and engineering talent from Nitero.
Nitero's technology for wireless virtual reality headsets will help AMD broaden its portfolio to create more immersive computing experiences, said Mark Papermaster, AMD chief technology officer and senior vice president.
"Unwieldy headset cables remain a significant barrier to driving widespread adoption of VR," he said in a statement. "Our newly acquired wireless VR technology is focused on solving this challenge, and is another example of AMD making long-term technology investments to develop high-performance computing and graphics technologies that can create more immersive computing experiences."

Pricing and terms of the acquisition were not disclosed. AMD did not respond to a request for comment before publication.
Austin, Tex.-based Nitero approaches the augmented and virtual reality markets through its flagship product, a millimeter wave chip that uses high-performance 60 GHz wireless.
The chip has beamforming capabilities that help reduce the performance latency suffered by wireless devices, allowing the VR headsets to provide a more immersive experience, said AMD.
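To see why a multi-gigabit 60 GHz link is attractive for cutting the cord, the rough sketch below estimates the raw, uncompressed video bandwidth of a typical 2016-era VR headset; the resolution, refresh rate, and color depth are generic assumed values, not Nitero or AMD figures.

# Rough estimate of the uncompressed video bandwidth a VR headset must carry.
# Resolution, refresh rate, and color depth are assumed, generic 2016-era values.
width, height = 2160, 1200   # combined panel resolution for both eyes (assumed)
refresh_hz = 90              # typical VR refresh rate (assumed)
bits_per_pixel = 24          # 8-bit RGB

raw_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"Uncompressed headset video: ~{raw_gbps:.1f} Gbps")
# ~5.6 Gbps -- well beyond ordinary Wi-Fi throughput, which is why short-range,
# multi-gigabit 60 GHz millimeter-wave links are pursued for wireless VR.
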
In addition to Nitero's IP, the company's co-founder and CEO Pat Kelly is joining AMD as vice president of Wireless IP. "Our world-class engineering team has been focused on solving the difficult problem of building wireless VR technologies that can be integrated into next-generation headsets," he said in a statement.
More chip manufacturers, like Intel and Qualcomm, are adopting virtual reality solutions as the demand for immersive computing dramatically increases. In November 2016, for instance, Intel snapped up virtual reality company Voke to boost its immersive VR experiences around sporting events.
AMD, for its part, hopes to broaden its existing virtual reality portfolio with the acquisition. The Sunnyvale, Calif.-based company's existing virtual reality lineup includes its LiquidVR virtual reality technology, a software development kit that is compatible with VR products like Oculus, HTC Vive, and Sulon.
The acquisition comes as AMD prepares to launch an array of new 14nm FinFET graphics cards based on its new Vega architecture. The graphics cards, which are targeted for enthusiast consumers, will benefit from the company's new virtual reality technology, said an AMD partner.
"I think this is a good acquisition for AMD," said Andrew Kretzer, director of sales and marketing at Bold Data Technology, a Fremont, Calif.-based system builder. "[AMD's] new Vega product is set to ship soon and VR is a big part of that launch. Low latency and wireless are important keys to the augmented reality field, so the Nitero IP should help AMD compete down the road in this expanding arena."

http://www.crn.com/news/components-peripherals/300084503/amd-brings-in-more-virtual-reality-capabilities-by-acquiring-niteros-ip-and-talent.htm

Friday, April 7, 2017

LiDAR Completes Sensing Triumvirate

Fully autonomous vehicles of the future will depend on a combination of different sensing technologies – advanced vision systems, radar, and light detection and ranging (LiDAR). Of the three, LiDAR is now the costliest part of the equation, and there are worldwide efforts to bring down those costs.
Mechanical LiDAR units are currently available, priced from thousands to tens of thousands of dollars. Those figures have to come down to make volume adoption of the technology possible in the cost-conscious automotive industry.
Along with the cost factor, LiDAR vendors must be able to show the high performance and reliability of their products. It’s not good enough to have 99% reliability for advanced driver-assistance systems and automated driving. In the safety-critical aspects of automotive manufacturing, the equipment has to demonstrate the “six nines” – 99.9999% reliability.
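As a quick sanity check on what “six nines” implies, the short sketch below converts reliability percentages into an allowed failure fraction and the equivalent unavailability per year of continuous operation; it is a generic availability calculation, not a figure taken from any automotive standard.

# Generic availability arithmetic: convert a reliability percentage into an
# allowed failure fraction and the equivalent unavailability over a year.
HOURS_PER_YEAR = 24 * 365

for reliability_pct in (99.0, 99.9999):
    failure_fraction = 1 - reliability_pct / 100
    downtime_hours = failure_fraction * HOURS_PER_YEAR
    print(f"{reliability_pct:>9.4f}% -> failure fraction {failure_fraction:.6f}, "
          f"~{downtime_hours:.2f} hours of unavailability per year")
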
The importance of advanced technology in automotive vehicles cannot be overstated. Intel’s proposed $15.3 billion acquisition of Mobileye, a vision systems vendor based in Israel, is a case in point. The chipmaker and Mobileye teamed up last year with BMW to collaborate on autonomous-vehicle technology.
LiDAR is a key component of that technology, and investors are opening their wallets for startups working on this technology. Blue-chip investors last month put $10 million into TetraVue, a LiDAR startup in Carlsbad, Calif. Investors include Foxconn, Nautilus Venture Partners, Robert Bosch Venture Capital, and Samsung Catalyst Fund.
Autonomic, a self-driving software startup located in Palo Alto, Calif., has raised around $11 million from Ford Motor and Social Capital. The four co-founders previously worked at Pivotal Labs.
Technology drivers
Technavio forecasts the worldwide automotive LiDAR sensors market will see a compound annual growth rate of more than 34% up to 2020. The market research firm estimates the automotive LiDAR market was worth $61.61 million in 2015, with most of the spending in the Europe/Middle East/Africa region and in the Americas.
The company has a report available, Global Automotive LiDAR Sensors Market 2016-2020, published last June, and it will be updating that report during the third quarter of this year.
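To make the growth rate concrete, the sketch below simply compounds Technavio’s published 2015 base at the reported 34% rate through 2020; it illustrates the CAGR arithmetic only and is not a reproduction of the firm’s own model.

# Compound Technavio's 2015 automotive LiDAR base at the reported 34% CAGR.
# Illustrative arithmetic only; Technavio's own projections may differ.
base_2015_musd = 61.61   # 2015 market size, US$ million (from the article)
cagr = 0.34              # "more than 34%" -- the lower bound is used here

for year in range(2015, 2021):
    value = base_2015_musd * (1 + cagr) ** (year - 2015)
    print(f"{year}: ~${value:,.1f}M")
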
“LiDAR technology in automotive industry is witnessing rapid evolution, both in terms of technical advancement and market dynamics,” says Siddharth Jaiswal, one of Technavio’s lead industry analysts for automotive electronics research.
Among the key developments cited by Technavio:
1. Cost reduction in an effort toward economies of scale. LiDAR manufacturers are working on reducing system cost by employing efficient processing techniques and, in certain cases, positioning products for specific customer segments. “For instance, the Velodyne LiDAR unit used on Google’s self-driving car is a 64-beam Velodyne HDL-64E priced at $80,000,” Jaiswal said. “Velodyne also offers 32-beam and 16-beam LiDAR units at $40,000 and $8,000 respectively, which can be used for economical projects. We expect LiDAR technology to follow a similar path to radar in the automotive industry, where cost played a crucial role in market adoption. Hence cost is a key focus area for the players.”
2. Compact design. Velodyne’s first LiDAR sensor, released in 2005, was so big and heavy (it weighed about 5 kilograms) that it had to be placed on the roof of the car. The weight is now less than a kilogram, and a solid-state version is compact enough to fit inside the car.
3. Sensor fusion. Combining imaging sensors with LiDAR has been a popular research topic for over a decade. The data output becomes more reliable when fusion confirms the output of one sensor by validating it against the other sensor type; if that validation fails, the system is treated as unreliable (see the sketch after this list).
4. Use of LiDAR beyond automobiles in road asset management. Traffic Speed Road Assessment Condition Surveys (TRACS) were introduced on the trunk road network in England in 2000. The U.K. Highways Agency conducts routine automated surveys of trunk road pavement surface condition under the TRACS program. LiDAR is used to measure distances from the sensor head, and potentially can deliver measurements of objects much farther from the survey vehicle than TRACS surveys do.
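The cross-validation idea in point 3 above can be made concrete with a minimal sketch: a camera detection is kept only if a LiDAR return is found near the same position, and flagged otherwise. The detections, distance threshold, and pass/fail handling below are invented for illustration and do not represent any particular vendor’s fusion stack.

# Minimal sensor-fusion cross-validation sketch (illustrative only).
# The detections, threshold, and pass/fail handling are assumptions,
# not any vendor's fusion algorithm.
import math

def fuse(camera_detections, lidar_points, max_dist_m=1.0):
    """Keep camera detections that a nearby LiDAR return confirms; flag the rest."""
    confirmed, unconfirmed = [], []
    for det in camera_detections:                # (x, y) position in the vehicle frame, meters
        if any(math.hypot(det[0] - p[0], det[1] - p[1]) <= max_dist_m for p in lidar_points):
            confirmed.append(det)                # both sensors agree -> trust the detection
        else:
            unconfirmed.append(det)              # sensors disagree -> treat as unreliable
    return confirmed, unconfirmed

camera = [(12.0, 0.5), (30.0, -2.0)]             # hypothetical camera detections
lidar = [(12.2, 0.4), (55.0, 1.0)]               # hypothetical LiDAR returns
print(fuse(camera, lidar))                       # -> ([(12.0, 0.5)], [(30.0, -2.0)])
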

“LiDAR is in a very lucrative position among the autonomous driving sensor suites,” said Jaiswal. “A 360-degree map is its key differentiator from other sensor technologies, and its ability to detect objects even in the complete absence of light has secured its place among OEMs. Also, the evident fall in price of the most expensive device in the autonomous vehicle, the LiDAR sensor unit, is likely to drive the adoption of automotive LiDAR sensors. For instance, Velodyne introduced its new LiDAR sensor, the ULTRA Puck VLP-32A, in 2016. It is claimed to be the most affordable LiDAR sensor capable of addressing vehicle automation levels 1-5 as defined by SAE, and is also very compact compared to the industry’s previous product versions. Because of the solid-state architecture, the sensor is small enough to be mounted onto exterior mirrors while extending the 3D sensing range to 200 meters (656 feet). Velodyne has set target pricing of less than $300 per unit in automotive mass-production quantities, a significant cost reduction from the $7,900 per unit of Velodyne’s previous compact LiDAR.”
Moreover, LiDAR can be developed using mature semiconductor process technologies, and the solid-state version has no moving parts.
“LiDAR is perceived as a key technology for accurate 3D mapping, vehicle awareness, and navigation,” said Pierre Cambou, imaging activity leader at Yole Développement. “First there is a race for performance and durability, through the use of short-wave infrared (SWIR) diodes, avalanche photodiodes, or single-photon avalanche diodes. There is also a huge effort in cost reduction. This is mainly trying to make the LiDAR solid-state, through steerable lasers, MEMS micromirrors, or detector arrays.”
But Cambou noted there are different approaches to autonomous driving, and LiDAR isn’t essential to all of them. “LiDAR is a fundamental piece of equipment for autonomous vehicles, which I would rather call robotic vehicles. There will be many levels of autonomy. LiDAR might be necessary for city autonomous emergency braking, probably in conjunction with radars and cameras. This multimodality approach is well-defined now. Nobody really questions it anymore.”
And LiDAR’s market will increase as prices drop, from about $300 million today to about $600 million over the next five years. “Today there are three entry points in automotive: $3,000, $300, and $30,” he said. “Cameras are currently at the $30 price point and LiDAR is at $3,000. The goal for the LiDAR players is to lower the cost and reach the $300 target without sacrificing too much of the performance. We will see such LiDARs entering the market, probably using solid-state approaches, in the next three years.”
That is a small fraction of the overall vision sensor market. “The consensus is there is almost the same revenue for automotive radar and automotive vision today, but vision is 50% forward ADAS and 50% park assist,” Cambou said. “We have reached $1 billion of automotive vision sensor value in 2016 and the growth is 24% CAGR. The horizon is $7.3 billion in automotive vision sensor revenue by 2021.”
Amin Kashi, director of ADAS and Automated Driving at Mentor Graphics, a Siemens business, said that interest in LiDAR began more than a decade ago due to the high cost of radar sensors at the time, which cost about $500 apiece. LiDAR sensors were extremely expensive then, at up to $260,000 per unit.
“Three years ago, you saw a number of companies or startups beginning to invest in and look into the LiDAR space,” Kashi said. “Every major Tier 1 somehow has started investing or acquiring companies in the LiDAR space.”
That includes companies such as Continental and TRW. Kashi previously worked at Quanergy Systems, which developed a mechanical LiDAR sensor and is working on a phased-array LiDAR sensor. Quanergy’s solid-state LiDAR sensor goes for about $250.
Meanwhile, Mentor Graphics is providing hardware, software, and design services to OEMs and Tier 1s addressing LiDAR, Kashi said. “We’re also providing software IP that their sensors can run. At the end of the day, all of the sensors have to somehow be fused. There needs to be a processing platform or system that takes all of this different information and makes it available for the decision engine. That’s where our interest is.”
Cameras, LiDAR, and radar are complementary to each other, providing redundancy for the deficiencies of each technology, he said. That’s critical because LiDAR can be less effective in fog and low clouds, dust storms, heavy rain, and heavy snow.
“You still have to have very good resolution for the sensors you use for your autonomous vehicles,” he noted. “There are a lot of companies working on LiDAR technology, a lot of startups, and they have very compelling concepts. The interesting thing is going to be to see whether the road to commercialization is successful. Some of these are very innovative, but it’s a big challenge going from a great concept to an automotive-grade sensor. And there is a lot of investment associated with that.”
Making comparisons between the various LiDAR technologies isn’t always straightforward, though, and it’s not made any easier as competition heats up.
“There’s lots of misleading information out there,” said Louay Eldada, CEO of startup Quanergy. “You have people who do traditional mechanical LiDAR—big, spinning mechanical LiDAR that’s used in helicopters—and they call themselves hybrid solid-state because the semiconductor content is non-zero. That’s just deception.”
Such products have one small chip in a bucket-sized product, according to Eldada. “In the automotive space, no one will still be using mechanical LiDAR. We believe strongly that our solid-state LiDAR is by far the most exciting development in this space.”
Quanergy last year received $90 million in Series B funding, bringing the total of its private funding to about $150 million and valuing the company at more than $1 billion. Delphi Automotive, GP Capital, Motus Ventures, Samsung Ventures, and Sensata Technologies invested in the Series B round.
XenomatiX, another startup, also focuses on solid-state LiDAR. “Startups are taking the lead in development that is considered to be essential for automated driving,” said Filip Geuens, CEO of the Belgium-based company. “There are huge investments and expensive acquisitions by some of the big guys to get the sensors and software required for automated driving. Most of these companies, technology-wise, are going in the same direction. We expect they will all hit essential hurdles. We are walking in a different direction and doing things slightly differently, because we believe this is the best way to overcome these hurdles.”
XenomatiX is trying to clear up sensing confusion among LiDAR systems, many of which use direct time-of-flight, sending out one beam of light or one flash of light, Geuens said. “The direction we are taking is to send out thousands of beams at the same time. It’s quite a challenge. We are also heeding the eye-safety restrictions. That is the most important hurdle, and it’s the same for all of us. We’re sending out many beams at the same time, and that makes it even harder. The upside is it makes the system so much more reliable in real circumstances where multiple LiDAR systems are operating at the same time.”
Some companies assert that cameras and radar are sufficient for automated driving. Geuens doesn’t believe that. He said that driving a car involves a 3D world, and LiDAR is essential for sensing in all directions.
Market confusion
One big issue in the industry is the push-and-pull between the OEMs and the Tier 1s. OEMs traditionally expect Tier 1s to bring them the advanced technology they need, while Tier 1 companies need proven technology before presenting it to the OEMs. According to numerous industry insiders, the vendors of automotive components don’t want to spend massively on R&D without OEM commitments to volume purchase orders.
Intel’s pending purchase of Mobileye is “a big step forward” in bringing high-technology products to the automotive industry, Geuens said.
But the race toward autonomous vehicles, and the amount of technological innovation required to get there, is bending some of the previous approaches. “Right now, LiDAR technology as a whole is kind of morphing,” said Jean-Yves DeschĂȘnes, president of Quebec-based Phantom Intelligence. “That morphing is caused by the automotive industry.”
Five to 10 years ago, LiDAR was primarily used for architectural, mapping, and military purposes. The units were typically huge, unwieldy devices with many mirrors.
“A lot of people are looking for a solution,” he said. “Recent research and companies we hear a lot about right now are trying to replace those mirrors. They reproduce the scanning LiDAR principle by using MEMS mirrors, beam steering, whatever. A lot of mapping is going in that direction. We believe strongly at Phantom Intelligence that the solution lies more in flash LiDAR technology. Flash LiDAR is pretty much an analog to a 3D camera. Instead of having a narrow beam progressively sweep the field of view to recreate the image, you flash the scene with a laser pulse over a large surface and use multiple pixels to reconstruct the image.”
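The flash-LiDAR approach Deschênes describes reduces to the same time-of-flight arithmetic applied at every pixel: one-way distance = speed of light × round-trip time / 2. The sketch below applies that formula to a tiny frame of made-up echo times; it is a simplified illustration, not Phantom Intelligence’s processing chain.

# Simplified per-pixel time-of-flight reconstruction for a flash LiDAR frame.
# The 2x2 grid of echo round-trip times is made up for illustration.
C = 299_792_458.0  # speed of light, m/s

def tof_to_distance(round_trip_s):
    """One laser pulse goes out and back, so one-way distance is c*t/2."""
    return C * round_trip_s / 2

echo_times = [            # seconds until each pixel saw the echo of the single flash
    [66.7e-9, 66.9e-9],
    [133.4e-9, 135.0e-9],
]

depth_map = [[tof_to_distance(t) for t in row] for row in echo_times]
for row in depth_map:
    print(["%.1f m" % d for d in row])   # roughly 10 m in the first row, 20 m in the second
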
LiDAR’s disadvantage is the echoes coming back to the sensor, noted Deschênes, who favors what he calls more intelligent signal processing. He predicts there will be five levels of autonomous driving, with fully autonomous vehicles rolling out in 2025 and widespread adoption of the technology by 2030.
Reality check
Put in perspective, LiDAR is a well-known technology that has finally found a lucrative market application.
“The principle of LiDAR – the light sent through the pulse and echo of time-of-flight – has not really changed,” said one industry source. “The physics have not changed ever since its invention, for the past 40 years or so. The evolving changes are more in the components and system integration. There’s no fundamental principle change.”
Flash LiDAR has been in development for the past five years, the source noted, likening it to a CMOS image sensor. “This is an area to watch for—the flash LiDAR technology. It promises a very low cost of solution, not necessarily high performance.”
Kevin Watson, senior director of product engineering at Redmond, Wash.-based MicroVision, a publicly held company, disagrees. “I don’t think that’s going to go anywhere,” he said of flash LiDAR. “For many years, we thought the Holy Grail of LiDAR sensors to be a MEMS mirror-based laser scanner, because they’re super-small, relatively inexpensive to manufacture in great quantity, and very reliable. They’re small enough to hide several around an automobile.”
Watson calls LiDAR “the most important sensor” in automotive electronics. “Vision systems are great, but they’re a totally passive system. LiDAR is active.”
But LiDAR also has its limitations. Radar can recognize a wall, has a longer range, and works in fog, while LiDAR and vision can be confounded. Achieving Level 4 autonomy, the next-to-highest level, is “a ways off,” said Watson, adding that it may not be realized for a decade. “It’s a very, very tough problem. It’s just a lot of work.”

http://semiengineering.com/lidar-completes-sensing-triumvirate/

Thursday, April 6, 2017

IBM Cloud to offer NVIDIA’s advanced GPU accelerator, vowing to solve previously impossible data challenges

IBM says it will become the first major cloud provider to offer NVIDIA’s Tesla P100 graphics processing unit (GPU) accelerator worldwide on its cloud.
Making the announcement this morning, IBM said its cloud customers will be able to use the GPU accelerator for applications such as artificial intelligence, deep learning and high-performance data analytics. The accelerator can support the types of intensive workloads required by high-throughput applications used in business sectors such as financial services, energy and healthcare.
“IBM’s new offering will provide organizations with near-instant access to Tesla P100 to test and run applications that have the potential to solve problems that were once unsolvable,” said Ian Buck, general manager of Accelerated Computing at NVIDIA, in a blog post today.
He added, “Tesla P100 and our GPU computing platform is already enabling customers to make breakthroughs in such diverse areas as fraud detection and prevention, genomic research into curing disease, eliminating millions of tons of waste through better inventory management, and automation of manufacturing tasks too dangerous for humans.”
GPUs are increasingly being used by all major cloud providers as a way to boost throughput and improve the performance of their cloud services.
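As a minimal illustration of the kind of throughput-bound work that gets offloaded to such accelerators, the hedged sketch below runs a large matrix multiplication on a GPU when one is present and falls back to the CPU otherwise; it assumes a PyTorch installation on the cloud instance and is not tied to IBM’s specific offering or the P100.

# Minimal sketch: run a throughput-heavy matrix multiplication on a GPU if one
# is available, otherwise on the CPU. Assumes PyTorch is installed on the
# instance; not specific to IBM Cloud or the Tesla P100.
import time
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

start = time.time()
c = a @ b                       # dense linear algebra of the kind that dominates deep learning
if device.type == "cuda":
    torch.cuda.synchronize()    # wait for the asynchronous GPU kernel to finish
print(f"4096x4096 matmul on {device}: {time.time() - start:.4f} s")
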
“The latest NVIDIA GPU technology delivered on the IBM Cloud is opening the door for enterprises of all sizes to use cognitive and AI to address complex big data challenges,” said John Considine, IBM’s general manager of Cloud Infrastructure, in a statement. “IBM’s global network of cloud data centers, along with its advanced cognitive and GPU capabilities, is helping to accelerate the pace of client innovation.”
IBM is, of course, not the only major cloud company NVIDIA is working with. Last month, the company issued a joint announcement with Microsoft to unveil plans for an HGX-1 hyperscale GPU accelerator, an open-source design to be released in conjunction with Microsoft’s Project Olympus. In November, during the annual SC16 supercomputing conference in Salt Lake City, NVIDIA also announced work with Microsoft to make it easier for enterprise customers to develop artificial intelligence applications that run on NVIDIA Tesla GPUs in the Microsoft Azure cloud.

http://www.geekwire.com/2017/ibm-cloud-adds-nvidia-accelerator-technology-for-high-throughput-and-ai-applications/

Wednesday, April 5, 2017

Qualcomm, NXP receive antitrust approval


Smartphone chipmaker Qualcomm Inc has received approval from U.S. antitrust regulators for its proposed $47 billion acquisition of NXP Semiconductors NV, Qualcomm said in a statement on Tuesday.
The waiting period required for companies under the Hart-Scott-Rodino Antitrust Improvements Act has expired, the company said.
Additionally, Qualcomm said it is extending its cash tender offer for all outstanding shares of NXP. The tender offer is now scheduled to expire at 5:00 p.m. EDT on May 2, 2017.
Qualcomm's acquisition of NXP will be the biggest ever in the semiconductor industry. The acquisition is also expected to help Qualcomm, which provides chips to Android smartphone makers and Apple Inc, reduce its dependence on a cooling smartphone market. (Reporting by Lauren Hirsch; Editing by Dan Grebler)

http://www.reuters.com/article/nxp-semicondtrs-qualcomm-regulatory-idUSL2N1HC26R

Tuesday, April 4, 2017

Apple, Amazon, Google join bidding for Toshiba chip unit: media


Apple Inc (AAPL.O), Amazon.com Inc (AMZN.O) and Google have joined the bidding for Toshiba's (6502.T) NAND flash memory unit, vying with others for the Japanese firm's prized semiconductor operation, the Yomiuri Shimbun daily reported on Saturday.
Toshiba shareholders on Thursday agreed to split off its NAND flash memory business, paving the way for a sale to raise at least $9 billion to cover U.S. nuclear unit charges that threaten the conglomerate's future.
The Yomiuri newspaper said bidding prices from Apple, Amazon or Google, owned by Alphabet Inc (GOOGL.O), were not known.
The Nikkei business daily reported on Friday that U.S. private equity firm Silver Lake Partners SILAK.UL and U.S. chipmaker Broadcom Ltd (AVGO.O) have offered Toshiba about 2 trillion yen ($18 billion) for the unit.
About 10 potential bidders are interested in buying a stake in the microchip operation, a source with knowledge of the planned sale told Reuters earlier.
Suitors include Western Digital Corp (WDC.O), which operates a chip plant with Toshiba in Japan, Micron Technology Inc (MU.O), South Korean chipmaker SK Hynix Inc (000660.KS) and financial investors.

http://www.reuters.com/article/us-toshiba-accounting-chip-sale-idUSKBN17335J