Thursday, March 31, 2016

Engineers Develop Open-Source Microprocessor for Wearables and IoT

Researchers from ETH Zurich and the University of Bologna have developed an open-source microprocessor system that maximizes the freedom of other developers to use and change the system. Open-source hardware products are abundant (think microcontroller boards like Arduino and Raspberry Pi), but these boards are based on commercial chips whose internal structure is not open-source.
“It will now be possible to build open source hardware from the ground up,” said Luca Benini, professor at ETH Zurich, who worked on the project. “In many recent examples of open-source hardware, usage is restricted by exclusive marketing rights and non-competition clauses. Our system, however, doesn’t have any strings attached when it comes to licensing.”
The team made the processor compatible with RISC-V, an open-source instruction set developed at the University of California, Berkeley, which means the set of instructions the microprocessor can perform is also open source.
A series of PULP microprocessor chips on a tray. (Image Credit: ETH Zurich / Frank K. Gürkaynak)
The processor, called PULPino, is designed for battery-powered devices with extremely low energy consumption, such as smartwatches, sensors for monitoring physiological functions, and IoT sensors.
Currently in the lab, Benini is using the PULPino processor to develop a smartwatch equipped with electronics and a micro camera that can analyze visual cues and then determine a user's location. Benini and the team are developing the smartwatch with the intention that one day it could control home electronics. It has been a challenge, however, to fit all of this into the tiny space of a microprocessor running on only a few milliwatts of power, since the computing capacity required for the image analysis must be sufficiently large.
Benini is also using PULPino in other research projects in conjunction with Swiss and European research institutions. “Until now, such research projects came about mainly as a result of personal contacts, and the partners had to negotiate a separate license agreement for each project. PULPino is now more easily available. We hope that there will be more collaborations in the future and that these will also be easier,” said Benini.
The scientists want to work with other project partners to jointly develop academically interesting extensions to PULPino, which would also be open-source, expanding the hardware's set of functional components.
PULPino also has the potential to benefit small and medium-sized European enterprises (SMEs). "The production of microchips has become cheap in recent years because semiconductor manufacturers have built up large production capacities that they must use," said Benini. The design process, however, is expensive for SMEs, especially when it comes to designing a complex chip from scratch. With an open-source piece of hardware like PULPino, new partnerships could be formed within the industry to develop novel chip components.

http://electronics360.globalspec.com/article/6488/engineers-develop-open-source-microprocessor-for-wearables-and-iot

Wednesday, March 30, 2016

HPE Intros 'Persistent Memory,' Combining DRAM Speed With NAND Flash Persistence

Hewlett Packard Enterprise on Tuesday officially rolled out a new type of server memory it said combines the performance of DRAM with the persistence of traditional SSDs or spinning disk.
The new technology, dubbed persistent memory, is scheduled to be available starting in early April, initially as an option in new versions of HPE's ProLiant Gen9 DL360 and DL380 servers -- possibly featuring new Intel Broadwell processors -- which the company is slated to introduce this Thursday.
The unveiling of persistent memory came via a meeting between HPE and a small group of journalists and analysts, including CRN.
[Related: HPE Unveils Hyper-Converged Infrastructure Based On ProLiant DL380 Servers]
With persistent memory, HPE is combining standard DRAM along with NAND flash memory and a micro controller with an integrated battery on a module that fits in a standard memory slot, said Tim Peters, HPE's vice president and general manager for ProLiant rack servers, server software and core enterprise solutions.
In its first implementation, the NVDIMM, which is short for "non-volatile DIMM," will pair 8 GB of DRAM for pure speed with 8 GB of NAND flash for persistence, Peters said. Future versions will be available in different capacity points, he said.
HPE 8-GB NVDIMM modules will be list priced at $899. This compares with about $249 for a standard RDIMM module, he said.
The NVDIMM modules will be first available with updated Gen9, or ninth-generation, ProLiant DL360 and DL380 servers slated to be unveiled Thursday, Peters said. The servers support up to 16 NVDIMMs per server.
HPE did not go into much detail about the updated ProLiant DL360 and ProLiant DL380 servers. But one of the presenters let slip that the new servers might include the new Intel Broadwell Xeon E5-2600 v4 processors. Peters then qualified the remark, saying that assumes such a processor exists, after an HPE spokesperson noted that Intel does not like partners talking publicly about unreleased products.
Intel did not respond to a request for more information about the timing of the release of its new Broadwell processors by publication time.
NVDIMM is going to be an amazing new option for business-critical infrastructures, said Dan Molina, chief technology officer at Nth Generation Computing, a San Diego-based solution provider and longtime HPE partner.
"This will be key to running important data in memory and knowing that the data can't be lost," Molina told CRN.
Customers with mission-critical applications such as Oracle and Microsoft SQL databases would like to use persistent memory to run those applications, Molina said.
"If they are run in memory that acts like storage, they will run dramatically faster," he said. "Customers could also fit part of an application like transactional logs in persistent memory, which would still provide important performance benefits."
NVDIMM persistent memory is less a product than it is an entire system, given that it was developed using industry standards and in conjunction with IT communities and technology partners, Peters said.
It was designed to help customers solve the challenge of making fast decisions based on data while ensuring that such data is not lost, and as such is a way to maximize the value of the data, he said.
"With the convergence of memory and storage, you can take advantage of [that value]," he said.
With traditional DRAM, if the memory loses power, the data inside is lost, Peters said. But because of the integration of DRAM and NAND flash, a loss of power will not lead to lost data. "We're combining the speed of memory with the persistence of storage -- hence, the name persistent memory," he said.
With persistent memory as a tier of storage, the IT bottleneck shifts from the hardware to the software, Peters said.
Based on HPE benchmarks, persistent memory provides 34 times the number of IOPS compared with standard SSDs, with 16 times the bandwidth and 81 times lower latency, he said.
NVDIMM will not be available as an option for previously sold servers, but will be available across new servers in the future, including HPE Synergy composable infrastructure solutions, Peters said.

http://www.crn.com/news/data-center/300080186/hpe-intros-persistent-memory-combining-dram-speed-with-nand-flash-persistence.htm

Tuesday, March 29, 2016

IBM wants to accelerate AI learning with new processor tech

Deep neural networks (DNNs) can be taught nearly anything, including how to beat us at our own games. The problem is that training AI systems ties up big-ticket supercomputers or data centers for days at a time. Scientists from IBM's T.J. Watson Research Center think they can cut the horsepower and learning times drastically using "resistive processing units," theoretical chips that combine CPU and non-volatile memory. Those could accelerate data speeds exponentially, resulting in systems that can do tasks like "natural speech recognition and translation between all world languages," according to the team.
So why does it take so much computing power and time to teach AI? The problem is that modern neural networks like Google's DeepMind or IBM Watson must perform billions of tasks in parallel. That requires numerous CPU memory calls, which quickly adds up over billions of cycles. The researchers debated using new storage tech like resistive RAM that can permanently store data with DRAM-like speeds. However, they eventually came up with the idea for a new type of chip called a resistive processing unit (RPU) that puts large amounts of resistive RAM directly onto a CPU.
Google's DeepMind AI topples Go champ Lee Sedol

Such chips could fetch the data as quickly as they can process it, dramatically decreasing neural network training times and the power required. "This massively parallel RPU architecture can achieve acceleration factors of 30,000 compared to state-of-the-art microprocessors ... problems that currently require days of training on a datacenter-size cluster with thousands of machines can be addressed within hours on a single RPU accelerator," according to the paper.
The scientists believe it's possible to build such chips using regular CMOS technology, but for now RPUs are still in the research phase. Furthermore, the technology behind them, like resistive RAM, has yet to be commercialized. However, building chips with fast local memory is a logical idea that could dramatically speed up AI tasks like image processing, language mastery and large-scale data analysis -- you know, all the things experts say we should be worried about.

http://www.engadget.com/2016/03/28/ibm-resistive-processing-deep-learning/

Monday, March 28, 2016

NXP taps to foster China's semiconductor usage market

"Tap-to-pay" is set to become the new favorite, as Apple Pay, Samsung Pay, Mi Pay and Huawei Pay have launched in sequence to compete in the nation's mobile payment market.
Take Mi Pay, introduced by Chinese smartphone maker Xiaomi in its new flagship Mi 5, as an example: although the Near Field Communication (NFC) solution provided by NXP Semiconductors remains at an early stage of adoption, the collaboration demonstrates the company's customer-oriented localization strategy and its ability to integrate resources.
As one of the main driving forces behind NFC and secure element solutions, the Dutch manufacturer's technologies are designed to address specific transit use cases and ensure a seamless consumer experience through enhanced radio frequency performance and security.
Zheng Li, NXP's senior vice-president and China CEO, sat down with China Daily to call for more cross-border collaborations between upstream and downstream of the supply chain in the semiconductor industry and elaborate on the implementation footprints of NXP's localization strategies.
"By 2020, the shipment volume of smartphones with NFC function will reach 2.2 billion units," said Zheng, citing statistics from market consulting institute IHS Technology.
According to Zheng, the fundamental drive behind the implementation of NFC technology is the continually increasing demand from mobile Internet users, who require upgraded technologies, a better user experience and privacy protection.
"As of 2015, China's third-party mobile payment market reached 16.36 trillion yuan ($2.51 trillion), and people have gradually become accustomed to using mobile devices, rather than PCs or laptops, to purchase goods," said Zheng. "Meanwhile, more challenges pop up for mobile carriers as user bases grow and data security issues increase."
As a leading technology provider in China's Automatic Fare Collection market since its rollout in the 1990s, NXP has been driving the migration from card to mobile.
"Thanks to our industrial globalization, we will provide more high-end international resources and technologies to jointly work with smartphone makers, mobile carriers and banks to promote the usage of NFC," Zheng added.
It is reported that there are currently 400 million transit cards in circulation in China, and many card holders are expected to transition to more convenient and secure payment solutions, such as mobile payments via smartphones.
China's mobile transaction market has seen robust growth in recent years. According to People's Bank of China statistics, 4.5 billion mobile transactions worth 18 trillion yuan ($2.76 trillion) occurred in Q3 2015, up 253 percent and 194 percent, respectively. As China continues to urbanize over the next decade, the market for smart payment technology in urban transit systems is expected to rapidly increase.

http://www.chinadaily.com.cn/bizchina/tech/2016-03/28/content_24142492.htm

Friday, March 25, 2016

China Moves to Contend in Chip Making

see the article on the Wall Street Journal:

http://www.wsj.com/articles/china-moves-to-contend-in-chip-making-1458851538

Thursday, March 24, 2016

U.S. Spurs Industrial Chip Market Growth

SAN FRANCISCO—The industrial semiconductor market is expected to grow at a compound annual growth rate (CAGR) of 8% between 2014 and 2019, when it is projected to be worth $59.5 billion, according to a forecast from market research firm IHS Inc.
IHS (Englewood, Colo.) predicts that increased capital spending and continued economic growth, especially in the U.S., will spur demand for industrial semiconductors. The firm lists commercial aircraft, LED lighting, digital video surveillance, climate control, traction and medical devices as the drivers for most of the global demand for industrial ICs.
IHS estimates that the United States remains the largest market for industrial semiconductors, accounting for 30% of all chips used in industrial applications in 2015. China was the second largest industrial IC market, accounting for 16% of the global total, according to IHS.
Robbie Galoso, IHS
“Robust economic growth and increased capital spending in the United States is good news for industrial semiconductor suppliers, because they have the world’s largest industrial equipment makers, including General Electric, United Technologies and Boeing,” said Robbie Galoso, associate director for industrial semiconductors at IHS Technology, in a statement. “Strong industrial equipment demand will further boost sales of optical semiconductors, analog chips and discretes, which are the three largest industrial semiconductor product segments.”
LED lighting is expected to be the largest driver of industrial semiconductor growth. The LED market is forecast to be worth $14.5 billion in 2019, thanks to a boom in global LED lighting, IHS said.
The microcontroller market is expected to grow from $4.4 billion in 2014 to $6.3 billion in 2019, thanks largely to advances in power efficiency and integration, IHS said.
IHS projects that analog application-specific ICs will grow strongly through 2019, reaching $4.7 billion in industrial markets, especially in factory automation, power and energy and lighting. IHS expects growth to come primarily from power management products and device integration from firms such as Texas Instruments Inc., Analog Devices Inc., NXP Semiconductors NV and others.
The market for discrete power transistors, thyristors, rectifiers and power diodes is expected to grow to $7.8 billion in 2019, due to the policy shift toward energy efficiency in the factory automation market, IHS said.

http://www.eetimes.com/document.asp?doc_id=1329272

Wednesday, March 23, 2016

Moore's Law Stutters at Intel

Intel has announced that it’s moving away from its current “tick-tock” chip production cycle and instead shifting to a three-step development process that will “lengthen the amount of time [available to] utilize... process technologies.”
For years now, Intel has run its chip business on a ‘tick-tock’ basis: First it develops a new manufacturing technique in one product cycle (tick!), then it upgrades its microprocessors in the next (tock!).
But recently it’s been struggling to keep pace. The last advance in Intel’s chips was the move to a design built with 14-nanometer transistors in its Broadwell processors, which, given the tick-tock cycle, we’d have expected to be miniaturized further in 2016. But last year Intel was forced to announce that its 2016 chip line-up, called Kaby Lake, would continue to use 14-nanometer processes. Instead, the next shrink would arrive in the second half of 2017, when Intel said it would shift to transistors measuring just 10 nanometers in its Cannonlake chips.
Now, in an annual report filing, Intel has officially announced that it’s moving away from the tick-tock timing. Instead, it will run on a three-step development process that it refers to as “Process-Architecture-Optimization.” From the filing:
As part of our R&D efforts, we plan to introduce a new Intel Core microarchitecture for desktops, notebooks (including Ultrabook devices and 2 in 1 systems), and Intel Xeon processors on a regular cadence. We expect to lengthen the amount of time we will utilize our 14nm and our next generation 10nm process technologies, further optimizing our products and process technologies while meeting the yearly market cadence for product introductions.
While it doesn’t explicitly refer to timescales, the news suggests that Moore’s Law—which states that the number of transistors on an integrated circuit doubles every two years—is stuttering at Intel. The size of the transistor, of course, dictates the number you can squeeze onto a chip. Indeed, last summer Intel’s CEO Brian Krzanich mused that “the last two technology transitions have signaled that our cadence today is closer to 2.5 years than two.”
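Krzanich's "closer to 2.5 years than two" remark is easy to quantify. The sketch below is purely illustrative (the one-billion-transistor starting count is a hypothetical baseline, not any specific Intel part); it shows how much a 2.5-year doubling cadence lags a 2-year one after a decade:

```python
def transistor_count(start, years, cadence):
    """Project a transistor count after `years`, doubling every `cadence` years."""
    return start * 2 ** (years / cadence)

baseline = 1e9  # hypothetical chip with one billion transistors

# Over ten years, a 2-year cadence yields 2^5 doublings; 2.5 years only 2^4.
two_year = transistor_count(baseline, 10, 2.0)   # 32x the baseline
slower = transistor_count(baseline, 10, 2.5)     # 16x the baseline

print(f"2.0-year cadence after 10 years: {two_year:.0f} transistors")
print(f"2.5-year cadence after 10 years: {slower:.0f} transistors")
```

In other words, the half-year slip in cadence halves the projected transistor budget within a single decade, which is why the change reads as Moore's Law stuttering rather than merely slowing.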
All of this is, of course, the result of the original “tick” becoming increasingly difficult to bring about: The limit of what can be done with conventional silicon is fast approaching. IBM has announced that it can create 7-nanometer transistors, but that relies on a new technique using silicon-germanium in the manufacturing process rather than pure silicon, and in any case the process is a long way from being fully commercialized.
The truth is that we may just have to start waiting a little longer for faster silicon, for now at least.

http://gizmodo.com/moores-law-stutters-as-intel-switches-from-2-step-to-3-1766574361

Tuesday, March 22, 2016

Google Seeks To Sell Prominent Robot Maker Boston Dynamics

Google's parent company is reportedly shopping a high-profile robotics subsidiary after just over two years.
Bloomberg, citing two people familiar with Alphabet's plans, indicated that company officials determined that Boston Dynamics was unlikely to generate revenue in coming years — a key emphasis of the tech giant’s research strategy — and decided to cut it loose.

Boston Dynamics was one of several robotics acquisitions by Google in recent years. The subsidiary generated significant buzz with online videos, including footage of a robot dog training with the Marines and of researchers effectively beating up their latest humanoid robot.
The division, however, reportedly struggled to find proper leadership and repeatedly clashed with officials from Replicant, Google's robotics initiative.
(Image credit: Boston Dynamics)
Boston Dynamics subsequently wasn't rolled into the Google X innovation arm, and Google X officials expressed particular discomfort about the humanoid video last month.
Bloomberg reported that Toyota's research arm and Amazon.com could be possible destinations for the company.


http://www.manufacturing.net/news/2016/03/google-seeks-sell-prominent-robot-maker-boston-dynamics

Monday, March 21, 2016

The Microchip That Made Silicon Valley—and All Modern Technology—Possible

All modern technology comes down to one tiny piece of oxidized silicon cemented atop a circuit. Without the pioneers who created it, future founding fathers such as Bill Gates and Mark Zuckerberg would have had nothing upon which to build their empires. Newsweek explores the phenomenon of the microchip in this article excerpted from a new Special Edition, The Founding Fathers of Silicon Valley, Exploring 60 Years of Innovation, by Issue Editor Alicia Kort.

As the inventions and innovations springing out of Silicon Valley become more ingrained in our lives, San Francisco has become the most expensive American city in which to start a business, surpassing both New York City and Los Angeles in office rental prices. According to Forbes, technology services and electronics are the second and fourth most profitable industries in the U.S., respectively, and tech start-up culture has become familiar enough to be lampooned in the media and on HBO. This was, obviously, not always the case.
In the 1930s, the Bay Area (along with the rest of America) was devoid of tech companies. The main attraction along the coast of what was then called “The Valley of Heart’s Delight” was the miles of fruit trees that lined the roads, providing work for many young immigrant families.
When the Depression hit, Heart’s Delight was devastated. The fruit industry struggled more than most until the overall economy saw an uptick during World War II. After years of downturn, Heart’s Delight suddenly became a great place to put weapons technology companies—in part thanks to the proximity of professors at universities like Stanford. A group of technology-based companies moved west and settled in the fertile Valley, ready to grow a new industry from the ground up.
Back east in postwar New York and New Jersey at Bell Labs, named after Alexander Graham Bell, a trio was working on an invention that would be the first major step in creating machines that were both more accessible and more powerful than the ones used by the Allies during World War II. William Shockley, John Bardeen and Walter Brattain were searching for an alternative to vacuum tubes, which were constantly overheating while supplying energy to the machines. While Shockley was out of the office, Bardeen and Brattain solved the problem when they realized that a semiconductor, germanium, was the right medium for controlling electrons, and discovered they could turn the flow of electricity on and off with it. They called their creation the transistor. Bell Labs filed a patent immediately, but Shockley was not included on it.
A temperamental, headstrong researcher, Shockley was furious. He shunned his two teammates and decided to work alone on a transistor that could be sold commercially. In 1951 he unveiled his creation, dubbed the junction transistor: a "sandwich" structure in which a layer of one type of semiconductor sits between two layers of the other. Finally receiving the attention he desired, Shockley left Bell Labs in 1956 to move to Mountain View, California, to start his own endeavor: Shockley Semiconductor Laboratory.
The location was chosen mainly out of convenience (Shockley’s mother was ill and lived in the area), but it would have a major impact on the region and the industry. “It was the 13th of February 1956 when Beckman and Shockley signed the formal agreement to do this, and the announcement was made the next day, February 14,” said David Laws, curator at the Computer History Museum, about the date Silicon Valley was founded. “That was really the defining moment, when it was decreed the silicon device would be made in Silicon Valley.”
After it was official, Shockley hired eight Ph.D. researchers, including Robert Noyce, Jean Hoerni and Gordon Moore, to investigate whether silicon could be used as a material in semiconductors. Not long into their new jobs at Shockley Semiconductor, the eight researchers noticed that, after Shockley won his Nobel Prize, their boss had become erratic and paranoid. They couldn’t stand working with him, so much so that they went to Arnold Beckman and declared they would only stay if Shockley was pushed out of the company. Beckman sided with his partner, and the “Traitorous Eight” made good on their word and sought out a new opportunity. Sherman Fairchild, owner of Fairchild Aircraft and Fairchild Camera, was the one who presented it. He told the group they should start their own company, and he would front the money. Thus, Fairchild Semiconductor was born. It was a landmark moment.
By 1957, transistors had replaced vacuum tubes, solving a problem that had plagued engineers for years, but a larger obstacle lay ahead. Transistors had to be carefully hand wired and soldered together, and if any wire was out of place, the transistor would not work. Each wire represented a “yes” or “no” switch: In order to solve a complex problem or program a task, there needed to be more “yeses” and “nos,” so more and more wires were used. Transistors were also made of three parts that always needed to be wired together: the emitter, base and collector. In order to solve more complex problems, contrary to the slimming and shrinking of devices we see so often today, computers were getting larger. But even the biggest machines were only so big. The idea of the personal computer had already been conceived, but, agonizingly, it could not be executed due to this “tyranny of numbers.”
The entire world was racing to solve the problem, and two teams of engineers reached the finish within months of each other: Fairchild Semiconductor, led by Robert Noyce, and Texas Instruments, led by Jack Kilby.
Shortly after starting his job at TI, Kilby was relatively alone in the office, the rest of the staff taking advantage of summer vacation. It seemed silly to Kilby to separate the emitter, base and collector parts of the transistor and then attempt to wire them together. He realized they would be more powerful and stable on one piece of silicon, but he still had to figure out how to incorporate the transistor into the chip.
Meanwhile, Fairchild’s Jean Hoerni had this exact same breakthrough. “Jean Hoerni invented a way of making integrated circuits, called the planar process, which absolutely revolutionized the building of semiconductors,” says Laws. “It turned it from a handcrafted, one-at-a-time operation into mass, high-volume production, and it continues to be the essence of the way chips are made today, 60 years later.”

The planar process used a thin layer of oxidized silicon on top of the circuit to cement the chip into place. The wires could then be put in place like candles on top of a cake, instead of painstakingly soldered one by one.
Months later, Noyce and Kilby made the discoveries that would complete the integrated circuit, but Texas Instruments was much quicker to submit the idea to the Patent Office. Of course, Fairchild fought to get credit. Noyce was ultimately awarded the patent, on the grounds that he had better drawings and had produced his design on a machine (rather than by hand, as Kilby had). Still, the patent victory was more symbolic than monetary in nature. Both companies were seeing unheard-of profits from their computer chips. Kilby continued to work at Texas Instruments for decades more, while Noyce went on to found Intel and create another landmark technology, the microprocessor.
But Kilby and Noyce’s original innovations, these little computer chips, have changed very little in the last 60 years. They power your smartphone, computer and even your kitchen appliances. Without the pioneers who created them, future founding fathers such as Bill Gates and Mark Zuckerberg would have had nothing upon which to build their empires.

http://www.newsweek.com/silicon-valley-microchip-modern-technology-437992

Friday, March 18, 2016

Multi-Beam Market Heats Up

The multi-beam e-beam mask writer business is heating up, as Intel and NuFlare have separately entered the emerging market.
In one surprising move, Intel is in the process of acquiring IMS Nanofabrication, a multi-beam e-beam equipment vendor. And separately, e-beam giant NuFlare recently disclosed its new multi-beam mask writer technology.
As a result of the moves, the Intel/IMS duo and NuFlare will now race each other to bring multi-beam mask writers to market. Still in the R&D stage, these newfangled tools promise to speed up the write times for next-generation photomasks, although there are still challenges in bringing this technology into production.
Intel, for one, sees a need for this technology. “Intel is completing its acquisition of IMS Nanofabrication,” according to officials from Intel. “This transaction will allow IMS Nanofabrication to focus on and accelerate the development of [multi-beam mask writing], which is a critical technology for securing the extension of Moore’s Law well into the future.”
To be sure, though, Intel’s move to acquire IMS took the industry by surprise. For one thing, Intel has invested in equipment suppliers in the past, but the company hasn’t bought a tool vendor outright in recent memory.
With IMS, Intel has taken a step into the equipment business. But it’s unlikely that the chip giant will make other acquisitions in the sector in the near term, according to analysts. The IMS deal is perhaps a one-time strategic move in a critical area, analysts said.
In any case, the move also represents Intel’s latest investment related to extreme ultraviolet (EUV) lithography. Indeed, Intel and others are placing huge bets on EUV. And to ensure the EUV infrastructure is ready, Intel and others have invested in various companies, such as ASML as well as Inpria, an EUV resist developer.
IMS’ technology is also expected to play a role for both EUV and optical masks. For years, Intel and other leading-edge chipmakers have produced photomasks within their own, internal mask shops.
Mask makers use traditional single-beam e-beam tools to pattern a photomask. But recently, e-beams have been struggling to keep up with complex masks.
In response, Intel and Photronics invested in IMS in 2011. In addition, Intel, DNP, Photronics and TSMC are collaborating on an effort to accelerate IMS’ multi-beam mask writer tools in the market. Still in the R&D stage, IMS’ technology makes use of multiple beams, which in theory will accelerate the write times in mask production.
Reports surfaced, however, that IMS recently fell behind schedule with its tool program amid technical issues, according to multiple sources. So to help IMS get back on track, Intel moved to secure the technology by acquiring IMS, sources said.
Meanwhile, under the terms of the deal, IMS will operate as a subsidiary of Intel. IMS will continue its “collaborative efforts and supporting their other industry customers,” according to Intel, which declined to comment further on the deal. IMS also declined to comment.
As part of Intel, though, IMS may encounter some difficulties in terms of selling its tools to Intel’s competitors that have mask shops, such as GlobalFoundries, Samsung, SK Hynix, SMIC, Toshiba and TSMC. That could open the door for NuFlare and its multi-beam tool. On the other hand, photomask makers welcome the idea of having two strong multi-beam vendors in the market.
All told, mask makers have high hopes for multi-beam mask writers. “Because everybody thinks multi-beam mask writing will happen, people are investing more as a result of the technology,” said Aki Fujimura, chief executive of D2S, in a recent interview. “The OPC community can do more creative things without the constraint of the mask being able to write in a reasonable time.”
The first multi-beam mask writers could move into early production by late 2016. These tools are expected to move into high-volume production by 2018, according to a survey from the eBeam Initiative.
Still, the questions are clear. Will multi-beam mask writer technology work as advertised? And what does it bring to the party?
Why multi-beam?
E-beams are used in the production of photomasks. Basically, a photomask consists of a chrome layer on a glass substrate.
In the flow, the photomask is patterned based on the specs of a given IC design. The mask, in turn, becomes a master template for that design. After a mask is patterned, it is shipped to the fab. The mask is placed in a lithography tool. The tool projects light through the mask, which, in turn, patterns the images on a wafer.
There are two types of systems that pattern the features on a mask—e-beams and laser-based pattern generators. E-beams are used to pattern critical layers, while laser-based tools are geared for non-critical layers.
Single-beam e-beam tools are based on variable shape beam (VSB) technology. In VSB, two shaped apertures are used to form a triangular or rectangular beam.
Not long ago, the e-beam could pattern a mask with ease. But recently, it’s become a different story. “Mask making is becoming increasingly difficult,” Fujimura said. “The big point is that complexity is growing for mask makers.”
As before, the lithography type determines the mask specs. Today, chipmakers are extending 193nm wavelength lithography to 16nm/14nm and beyond.
To deal with the diffraction issues, mask makers must use various reticle enhancement techniques (RETs). One RET, called optical proximity correction (OPC), is used to modify the mask patterns to improve the printability on the wafer. OPC makes use of tiny assist features on the mask. And the features are getting smaller and more complex at each node.
Mask makers are also moving towards inverse lithography technology (ILT). ILT boosts the pattern fidelity on the mask. But it also involves the creation of more complex curvilinear features on the mask.
As the mask becomes more complex, photomask makers are seeing an increase in write times. Write times, the key metric in mask production, determine how fast an e-beam can write a mask layer. They depend on the number of e-beam shots required to pattern a mask layout; a more complex mask requires more shots.
From 2001 to 2005, e-beam write times were 8 hours per mask set. In 2015, the average mask write times were 9.6 hours, according to the eBeam Initiative survey. The write times for more complex masks range from 18 to 72 hours today, according to the survey.
Since 2011, the write times have increased by 25%, due to mask complexity. This, in turn, impacts mask turnaround times and cost.
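The relationship described above can be put into a back-of-envelope model. The shot counts and shot rate below are illustrative assumptions, not vendor figures; the point is only that write time grows directly with shot count:

```python
# Back-of-envelope model of VSB e-beam write time: write time scales with
# the number of shots needed to pattern the layout, plus fixed overhead.
# All numbers here are illustrative assumptions, not vendor specifications.

def vsb_write_time_hours(shot_count, shots_per_second, overhead_hours=1.0):
    """Estimate the write time of one mask layer from shot count and shot rate."""
    return overhead_hours + shot_count / shots_per_second / 3600.0

# A simple layout vs. a complex, OPC-heavy layout (hypothetical shot counts).
simple = vsb_write_time_hours(shot_count=3e10, shots_per_second=1e6)
complex_ = vsb_write_time_hours(shot_count=2e11, shots_per_second=1e6)
print(round(simple, 1), round(complex_, 1))
```

With these assumed inputs, the simple layout lands near the ~10-hour average the survey cites, while the complex one falls inside the 18-to-72-hour range, showing how shot count alone drives the spread.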
Mask makers have found ways to solve these issues. For example, chipmakers moved to multiple patterning starting at 20nm. In multi-patterning, the mask is split into two or more mask layers.
To solve the problem, some mask shops simply buy more e-beams. In some cases, they are simultaneously utilizing two e-beams on the same mask. The e-beams write different layers in parallel to reduce the cycle times and cost.
There are other ways to reduce write times. Over the years, e-beam vendors have made their systems faster by increasing the current densities in a tool. NuFlare, for example, has significantly boosted the current densities in its single-beam tool over the last decade, from 70A/cm2 in 2006 to 1,200A/cm2 today.
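For context, the figures above imply a steep compound growth rate, which a one-line check makes explicit:

```python
# Implied annual growth of single-beam current density, from the figures
# above: 70 A/cm2 in 2006 to 1,200 A/cm2 a decade later.
rate = (1200 / 70) ** (1 / 10) - 1
print(f"{100 * rate:.0f}% per year")
```

That works out to roughly a third more current density every year for a decade, which helps explain why the approach is now running into physical limits.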
“It’s amazing what the e-beam can do today,” said Franklin Kalk, executive vice president of technology at Toppan Photomasks. “Shaped beam tools are accurate. They have placement accuracies in the low single digits now. The CD accuracies are almost immeasurable.”
But in some respects, today’s single-beam e-beams have hit a physical limit. “The end of current density scaling for shaped beam is about 1200A/cm2,” Kalk said. “Beyond that, you have to go to multi-beam.”
It’s possible to develop a tool beyond 1200A/cm2. But as the current densities increase beyond a certain figure, the shot size becomes too small. “At some point, that doesn’t really help you anymore,” Kalk said.
In fact, NuFlare’s latest tool, called the EBM-9500, has a current density of 1,200A/cm2. This tool, according to NuFlare, represents the company’s last single-beam system.
All told, single-beam e-beam is expected to run out of steam at 7nm or 5nm, according to analysts. Then, mask makers will require multi-beam mask writers, especially for EUV photomasks.
For example, with today’s single-beam tools, EUV mask write times could range from 50 to 100 hours just for one leading-edge mask, a figure that is unacceptable in the industry, analysts said. In contrast, multi-beam mask writers promise to keep the write times down to a few hours or a half-day for all mask types.
Lab to the mask shop?
Bringing multi-beam mask writers from the lab to the mask shop is challenging. Indeed, after nearly a decade in R&D, these systems are still not in production amid a number of technical challenges.
Today, two entities, Intel/IMS and NuFlare, are separately developing multi-beam mask writers. For some time, IMS has been developing a system with 262,144 programmable beams. The 50-keV tool has demonstrated a half-pitch resolution of 30nm.
In 2013, IMS joined forces with JEOL to co-develop tools. IMS provides its multi-beam technology, while JEOL is the systems integrator. The goal is to ship a high-volume mask writer in 2016.
Meanwhile, NuFlare recently disclosed the details of its multi-beam tool. The 50-keV system, dubbed the MBM-1000, consists of roughly 250,000 beams, according to Hiroshi Matsumoto of NuFlare. The tool, which is geared for 5nm, will ship in late 2017.
Intel/IMS and NuFlare both promise half-day write times, but they are taking different approaches. NuFlare’s tool performs the blanking functions at low voltage and then accelerates the current. In contrast, the Intel/IMS tool accelerates the current and then does the blanking.
Both vendors face similar challenges. “The nice thing about a multi-beam tool is that it is a deterministic machine,” Toppan’s Kalk said. “As long as you get the data down the datapath, it will write a mask in a time that’s determined by the overall area of the pattern you are writing.”
The big challenge is to move that data down the datapath. “It is not trivial to put down a quarter million beams with perfect fidelity and timing at high speeds. You are also talking about writing a mask in say 12, 14 or 15 hours,” he said. “That’s a lot of data to put down in the pipeline. The datapath is going to be critical. If we can’t get the data to the beam lines, it will slow (the system) down.”
There are other challenges. If or when the industry inserts EUV for production, mask makers must contend with the complexities of EUV masks. For EUV, the sub‐resolution assist feature (SRAF) sizes on the mask range from 32nm to 40nm, compared to 60nm for optical. The SRAF 1x design sizes range from 8nm to 10nm for EUV, compared to 15nm for optical, according to Mentor Graphics.
“Minimum feature size on the mask will decrease since k1 decreases, affecting both primary features and assist features,” said Peter Buck, manager of MDP and platform solutions at Mentor Graphics. “Mask layout specific compensation for EUV-specific optical effects, such as shadowing, flare and black border, may require full-mask layout OPC and will require full-mask layout MPC and fracture.”
Needless to say, EUV masks are complex. On the other hand, EUV masks will require fewer masks per set. So, in theory, a multi-beam mask writer could keep EUV mask write times in check.
But mask makers may want to keep their single-beam tools around for EUV, especially if the multi-beam tools are late or fall short of their promises. “While multi-beam writers are targeted to be faster than VSB writers for advanced layers, the complexity of massively-parallel e-beam writers makes these tools challenging to produce,” Buck said. “Innovation for VSB writers continues. It seems likely that both technologies will have significant roles to play in both advanced DUV and EUV mask lithography.”
Multi-beam mask writers are just one part of what’s required to make EUV happen in the mask shop. In the EUV mask infrastructure, the industry has made progress in some but not all areas.
EUV mask blanks are a bright spot. Last year, there were an average of 10 defects on an EUV mask blank, said Seong-Sue Kim, a technical staff member within the Semiconductor R&D Center at Samsung.
The industry hopes to bring that figure down to five defects, which can be achieved within the next year, Kim said. “There has been continued progress in blank defectivity,” he said.
Still, there are some major gaps. “Mask defects are still a key issue,” said Rich Wise, technical managing director at Lam Research. “You need actinic inspection.”
The industry is begging for actinic-based pattern mask inspection for EUV, but no such tool exists. So for now, the industry must use today’s optical and e-beam inspection to inspect EUV masks. “The best idea is to extend the existing platforms,” said Yalin Xiong, general manager of the Reticle Products Division at KLA-Tencor. “For the short term, this solution is good enough.”
Nevertheless, there are other technologies that must come together before EUV moves into production. EUV is targeted at 7nm or 5nm. “The source power is making progress, but there is still work to do,” Lam’s Wise said.
Yet most say it’s a matter of when, not if, EUV will move into production. “EUV is making huge progress,” said David Fried, chief technology officer at Coventor, a supplier of predictive modeling tools. “For example, you have the pellicle, resist defectivity and other issues. Those things were dark clouds a few years ago. Now, the problems are clearly defined and there are competing solutions. People are working on them and those will be solved.”

http://semiengineering.com/multi-beam-market-heats-up/

Thursday, March 17, 2016

Ready For Nanoimprint?

Nanoimprint has been discussed, debated, and hyped since the term was first introduced in 1996. Now, a full 20 years later, it is being taken much more seriously in light of increasing photomask costs and delays in bringing alternatives to market.
Nanoimprint lithography is something like a hot embossing process. The structures are patterned onto a template or mold using an e-beam or scanner, and then pressed into a resist on a substrate. After that, the template is removed. In semiconductor lithography, this is a relatively simple process by comparison, which is why it has attracted so much attention.
Resolution has been well documented for this technology. But other key metrics—throughput, overlay and defect density—are still unproven. And that has set off a flurry of activity around nanoimprint, notably from Canon and Toshiba.
Canon’s imprint process is very different from conventional lithography. It starts with a pattern that is formed by ink-jetting drops of a UV-curable resist. A mold with the desired pattern is lowered into the liquid, which fills the mold. The resist is cured by a flash of UV light, and the mold is then separated from the pattern.
The process was invented at the University of Texas and was refined by the venture-funded startup Molecular Imprints. Canon acquired Molecular imprints in 2014. The challenges for imprint were obvious from the start. Could the liquid spread quickly? Could the patterns be overlaid to within single nanometers? Could the mechanical molding process be clean enough to yield devices? And could the 1x molds be made defect-free?
So where are we with throughput?
“We have developed a cluster tool system with four imprint heads and four stages,” said Kazunori Iwamoto, deputy group executive at Canon, in an interview last month at the Advanced Lithography Symposium. “The throughput has improved from 40 wafers per hour (300mm wafers) in 2014 to 60 wafers per hour in 2016. What’s more, this platform will achieve more than 80 wafers per hour in 2017.”
Iwamoto explained the throughput improvement comes from faster filling times of the imprint resist into the mold. To reduce the filling time, a faster spread of the imprint liquid is required.
Two techniques were described at the conference. One is the combination of a smaller drop volume (1 picoliter) and a higher drop density, which reduces air bubbles during filling. The other is the development of a new imprint resist with faster spread and filling times. Throughput, imprint uniformity and defect density are also improved by design for imprint, or DFI.
“We do have some simple layout design rules,” explained Mark Melliar-Smith, CEO of Canon Nanotechnology (formerly Molecular Imprints). “The spread of the imprint liquid is sensitive to pattern density, so we require the use of dummy features in large, unpatterned areas much like CMP. We also require the top surface to be flat to similar tolerance for DOF (depth of focus) for 193 litho.”
Melliar-Smith emphasized that there were no additional constraints on scribe lines. “Our customers would not tolerate any changes.”
A separate element to design for imprint is drop-pattern management. “We have developed software to design the drop pattern to match the fill of the pattern, eliminate the trapping of air bubbles, and speeding up the spreading step,” Melliar-Smith said.
That will be critical for improving wafer throughput. Iwamoto said that the long-term goal of 200 wafers per hour will require larger imprint fields.
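The fill-time lever Iwamoto describes can be sketched with a toy throughput model for a four-head cluster tool. The field count and step times below are assumed values, not Canon’s numbers; only the direction of the effect matters:

```python
# Toy model: wafers per hour on a multi-head imprint cluster tool, where
# each field needs a resist-fill step plus other fixed steps (dispense,
# expose, separate). All step times and the field count are assumptions.

def wafers_per_hour(fields=84, fill_s=2.3, other_s=0.6, heads=4):
    """Throughput when `heads` imprint heads work on wafers in parallel."""
    seconds_per_wafer = fields * (fill_s + other_s) / heads
    return 3600.0 / seconds_per_wafer

print(round(wafers_per_hour(fill_s=2.3)))  # baseline resist
print(round(wafers_per_hour(fill_s=1.4)))  # faster-spreading resist
```

With these assumed numbers, shaving under a second of fill time per field moves the tool from roughly 60 to well over 80 wafers per hour, the same direction as the progression Canon reports.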
Overlay
One piece that is critical to this whole process is overlay, which is the ability of a lithography scanner to align and print the various layers accurately on top of each other.
“Current mix and match overlay (MMO) is at 4.8nm 3 sigma, and the goal for next year is 4nm which will meet production targets for NAND and DRAM,” said Iwamoto. “In 2018 MMO will improve further to less than 3.5nm.”
He noted that the current MMO error includes a large wafer distortion error coming from the prior lithographic levels. Reduction of that error is key to MMO improvement. Canon has developed something called High Order Correction (HOC), and also a new wafer chuck for imprint. The HOC system uses a second light source that can be modulated using a digital mirror device. The light locally heats the wafer and mask, and because of the difference in thermal expansion coefficients, local wafer distortion corrections can be made.
He showed data that HOC reduced wafer distortion errors in a single field from 2.5 nm to 0.67nm. “In addition, we developed a new wafer chuck to improve the flatness around the wafer edge by using special tooling, to help us to meet production overlay specification.”
Defects
There are three defect types that Canon is concerned with: mask defects, in-process random defects, and in-process adders, often expressed as mask life.
The company has demonstrated five defects per cm² for a 2xnm half-pitch pattern, using masks made by DNP. The goal for engineering release is 1 per cm², and for production release, 0.1 per cm².
In a presentation, Toshiba showed lower values of 1 defect per cm². Melliar-Smith suggested that the lower value measured by Toshiba was probably a reflection of the production environment at Toshiba. The causes of these defects were ion contamination and trapped surface bubbles, which are being worked on for mitigation. Toshiba also showed a four-wafer run with no added repeating defects, a critical capability.
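To see why those defect-density targets matter, the standard Poisson yield model Y = exp(-D*A) relates defects per cm² to the fraction of usable die; the 1 cm² die area below is an assumed illustration:

```python
# Poisson yield model, a standard way to relate defect density D (per cm^2)
# to the yield of a die of area A (cm^2): Y = exp(-D * A).
# The die area is an assumed value for illustration.
import math

def poisson_yield(defects_per_cm2, die_area_cm2):
    """Fraction of die expected to be free of random defects."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

die = 1.0  # assumed 1 cm^2 die
for d in (5.0, 1.0, 0.1):
    print(d, round(poisson_yield(d, die), 3))
```

At 5 defects per cm², almost no such die survive; at 0.1 per cm², roughly nine in ten do. Memory can tolerate higher densities than this raw model suggests because repair and error correction recover many defective die, which is why memory is the entry point for the technology.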
In a presentation, DNP presented data on 2xnm masks and mask copies. The company has made 2xnm patterns with 1-2 defects per mask using its current mask replication tool. When an audience member asked, “Are you ready for production?” the answer was “yes.”
DNP also showed data for 1xnm parts with 10 defects per mask. There was a discussion of this problem, which is caused by trying to separate two stiff mask blanks.
“I have complete confidence that the 1xnm will be as good as 2x nm very quickly. We understand the problem and DNP is making rapid progress,” said Melliar-Smith.
Iwamoto emphasized that Canon is now developing a new mask replication tool to support mass production at 1xnm.
Finally, Iwamoto showed results for airborne particle adders as an indicator of mask life. Canon has applied its materials expertise to treat equipment surfaces and has developed an air curtain around the imprint head to protect the wafer. “The results suggest a mask life in excess of 1,000 wafers, the production goal,” said Melliar-Smith.
There are two early adopters of the technology, Toshiba and Hynix. Canon says that is enough to reach critical mass for high-volume manufacturing. “We have to start small and grow,” Melliar-Smith said. “Today, we probably do not have the bandwidth for many more customers. If we can continue to show progress, other customers will be interested, and if we can get defects down another 100X, we can even use this for logic.”
EUV also is making progress, but probably has only one generation before it has to add multi-patterning or a much larger NA. “We think imprint has a long-term future, with resolution below 10nm, no shot noise, minimal layout constraints, and the potential of increasing throughput from larger fields that are not possible with optics,” he said.
Long time in development
Tatsuhiko Higashiki, of Toshiba’s Research and Development Center, began the imprint program inside of Toshiba a decade ago.
“Ten years ago, I was approached by my colleagues to help them find a way to pattern 30nm pitch and below, which was beyond immersion at that time, and multi-patterning and EUVL had not been developed,” said Higashiki. “I was researching high-resolution lithography, such as interferometric lithography, for the whole 300mm wafer area. However, that technology can expose only dense patterns. Then Molecular Imprints visited me and I saw a way to create small test structures using a relatively inexpensive tool, so we started with an Imprio 200 system. At the time I did not imagine that imprint could be used as a high-volume manufacturing tool.”
By February 2011, there were papers by Toshiba at the SPIE Advanced Lithography conference reporting on its results using an MII system. MII reported shipping an imprint module that was being integrated by its equipment partner, and Canon reported on its evaluation of an MII system. http://semiengineering.com/imprint-ngl/
In February 2014, it was announced that Canon was acquiring the semiconductor operations of Molecular Imprints. And in February 2015, Toshiba signed a definitive agreement with SK Hynix on joint development of next-generation lithography, targeting practical use in 2017.
Last year at SPIE 2015, Toshiba presented results on a working memory device, with the critical layer patterned using a Canon imprint ADT (advanced development technology) tool. “I have confidence in imprint as a future patterning solution,” said Higashiki. “Today we have 50 companies in the supply chain engaged in imprint. We have added several imprint ADT tools on a Canon platform with an MII imprint head.”
Toshiba talked about the growth in the ecosystem, which today includes Shibaura (mask etcher), NuFlare (EB writer and mask inspection), as well as Canon, TEL, Zeon, TOK, Fuji Film and JSR.
But there is more work ahead, Higashiki noted. “To run production in memory, today’s defect density of 5 per cm² must come down by 5X. This is still 100X higher than the level needed for logic. The higher defect tolerance is a direct result of the error correction software that runs on memory. Overlay will be at 2 to 3nm, which will be good enough for memory.”
The template also requires much work as it remains very demanding for resolution, distortion and defects. This requires access to a very specialized set of process equipment to be successful.

http://semiengineering.com/ready-for-nanoimprint/

Wednesday, March 16, 2016

After PCs and mobile, a chip battle for VR devices is brewing


Millions of people will buy VR headsets in the coming years to play games and view 3D content, and those sales could spark a real-world war among chip-makers.
Some new VR headsets announced at this week's Game Developers Conference in San Francisco are full-on computers, and others need to be hooked up to PCs with powerful graphics cards, similar to Oculus Rift.
The VR devices, designed for gaming and for roaming 3D worlds, are a showcase for the graphics technologies of chip makers like AMD and Qualcomm.
The headsets set up a battle in the fast-growing VR space among chip makers, which also include Intel and Nvidia. The war among chip makers comes with a twist because it places a large emphasis on GPUs, which are important for rendering 4K video and 3D content.
The Sulon Q headset, announced this week, is a full computer in a headset for VR and augmented reality, and it runs on AMD's chips code-named Carrizo. The headset is similar to Microsoft's HoloLens, and it allows users to interact with 3D objects that show up as floating images, much like holographic projections. It has a 2560 x 1440 OLED display and a graphics processor that can run high-end games.
But with PC components crammed into the headset, questions remain about whether it will heat up and be uncomfortable to wear.
The Carrizo chips include a lighter version of the high-powered Radeon GPUs used in desktops linked to Oculus Rift VR headsets. AMD believes VR headsets will need powerful on-board graphics processors for life-like images and high frame rates, or else the experience could get nauseating.
By comparison, Microsoft's HoloLens, which will ship soon, has a low-power Intel processor code-named Cherry Trail, designed for tablets. The GPU in Cherry Trail isn't as powerful as the AMD GPU in Sulon Q, but provides the HoloLens with a longer battery life. The HoloLens is focused more on blending real and virtual worlds than on high-end gaming.
Qualcomm's latest Snapdragon 820 chip -- which is in smartphones like Samsung's latest Galaxy S7 -- is also coming to VR headsets.
A headset from China-based Goertek is among the first with a Snapdragon 820 chip, which is capable of rendering 4K video and 360-degree interactive video.
"Several other VR devices that use Snapdragon 820 will be announced later this year," Tim Leland, vice president for product management at Qualcomm, said in an email.
Right now, most of the VR headset chips are adapted from those found in mobile devices or PCs. But if VR headset shipments hit tens of millions of units, Intel, AMD and Qualcomm may develop specialized chips, said Nathan Brookwood, principal analyst at Insight 64.
Making chips can be expensive, and the economics have to make sense, Brookwood said.
But the rise of VR reinforces the importance of visual computing and graphics processors, Brookwood said.
Gartner is projecting VR headset shipments to reach 1.4 million this year, growing from 140,000 in 2015. Shipments will grow to 6.3 million by 2017, the company predicts.


http://www.pcworld.com/article/3044522/hardware/after-pcs-and-mobile-a-chip-battle-for-vr-devices-is-brewing.html

Tuesday, March 15, 2016

SK Hynix ranks No. 3 in global chip sector

Seoul-based SK hynix became the third largest chipmaker in the world for the first time in 2015, according to an industry report Tuesday.

SK Group’s chip business affiliate earned $16.5 billion in revenue last year -- the third largest in the world -- taking up 4.8 percent of global market share, according to a report released by market researcher IHS.

The Korean firm pushed its rival Qualcomm down to fourth place. The California-based chipmaker logged $16.4 billion in revenue.

Samsung Electronics, which has been trying to catch up with the long-time market leader Intel, saw its sales surpass the $40-billion mark in the same year, at $40.2 billion, for the first time. The U.S. chipmaker raked in $51.4 billion.

Samsung clinched 11.6 percent of the market, against Intel’s 14.8 percent.
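The revenue and share figures above can be cross-checked against each other, since share = revenue / total market. The small differences from the published shares come down to rounding in the reported numbers:

```python
# Cross-check of the IHS figures quoted above: the total market size is
# implied by SK Hynix's revenue and share, and the other shares follow.

sk_hynix_rev, sk_hynix_share = 16.5, 0.048   # $B, fraction of market
market = sk_hynix_rev / sk_hynix_share

print(round(market))                   # implied total market, $B
print(round(100 * 40.2 / market, 1))   # Samsung's implied share, %
print(round(100 * 51.4 / market, 1))   # Intel's implied share, %
```

The implied total market is around $344 billion, and the computed Samsung and Intel shares land within a few tenths of a point of the article’s 11.6 and 14.8 percent figures.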

Japanese chipmakers’ presence in the global market has dwindled from 15.7 percent in 2010 to 9.8 percent in 2015. Japan used to be a chipmaking powerhouse in the early 2000s, with a global market share of around 20 percent. However, major Japanese tech firms Panasonic, Hitachi, and Toshiba have shifted their focus away from chip businesses due to challenges posed by Korean and Taiwanese chipmakers.

http://www.koreaherald.com/view.php?ud=20160315000564

Monday, March 14, 2016

imec adds to its cleanroom space to push beyond 7nm

Belgian nanoelectronics research centre imec has opened a new 300mm cleanroom. The cleanroom represents an investment of more than €1billion, of which €100million has come from the Flemish Government and the remainder from imec’s industrial partners.
The 4000m2 facility increases imec’s cleanroom space to 12,000m2 and will play a central role in work to take IC technology beyond the 7nm node.
“Since our founding in 1984, imec has become the world’s largest independent nanoelectronics research centre with the highest industry commitment,” said imec CEO Luc Van den hove. “The extension of our cleanroom provides our partners with the necessary resources for continued leading edge innovation and imec’s success in the future within the local and global high-tech industry.”


http://www.newelectronics.co.uk/electronics-news/imec-adds-to-its-cleanroom-space-to-push-beyond-7nm/116633/

Friday, March 11, 2016

Samsung Warns of 2016 Tech Gloom, Opens Up Chairman's Seat

Samsung Electronics Co. warned of rising competition across businesses from smartphones to memory chips, again sounding a dour note for the global technology industry in 2016.
The world’s largest smartphone vendor faces another difficult year after a 2015 plagued by economic turbulence and volatile exchange rates, Chief Executive Officer Kwon Oh-Hyun said in a letter to shareholders ahead of their annual meeting on Friday.
At the gathering, Samsung formally adopted a proposal to allow non-CEOs to take up the chairman’s role for the first time, a move that signals efforts to improve governance. The pool of candidates now encompasses qualified executives as well as independent board directors, said Kelly Yeo, a spokeswoman for the company. South Korea’s largest conglomerate is in part reacting to criticism after the sale of a subsidiary to another unit, in a controversial 2015 deal that helped cement the Lee family’s control of the empire.
More immediately, Samsung Electronics -- the maker of Galaxy smartphones and the group’s crown jewel -- is fighting to protect its market share from Apple Inc. and Chinese rivals like Huawei Technologies Co. and grappling with declining semiconductor and consumer electronics prices.
“We expect core products of our company, such as smartphone, TV, and memory, will face oversupply issues and intensified price competition,” the CEO said in his annual letter. “We expect that 2016 will also be a tough year.”

First Mover

Samsung Electronics in January reported a quarterly profit that fell short of expectations by almost 40 percent and said it expects slowing demand for smartphones and more global economic headwinds this year. The company is investing in new technologies such as foldable mobile displays to try and boost profit.
“Even under the challenging circumstances, we will renew everything from our product development and management to the organizational culture in order to lead the new era and become a true first mover,” Kwon said.
Apple -- Samsung’s biggest customer according to data compiled by Bloomberg -- has predicted its first sales decline in a decade. CEO Tim Cook said the maker of iPhones was seeing “extreme conditions” unlike anything it had ever encountered, with economic growth in China at its weakest pace in 25 years.

http://www.bloomberg.com/news/articles/2016-03-11/samsung-warns-of-competition-as-investors-vote-on-board-changes

Thursday, March 10, 2016

China's tech ambitions put South Korea on alert

China's push to become a world leader in high-tech industries has one neighbor particularly worried about new competition on the block: South Korea.
In the mainland's new economic blueprint unveiled on Saturday, known as the Five-Year Plan, Chinese Communist Party officials identified semiconductors as a potential tech sector to dominate. That has raised an alarm in South Korea's semiconductor industry, the world's largest after the U.S. with an 18 percent global market share.
At present, China commands just 3 percent of the global semiconductor market share but Beijing is hoping to increase that figure as part of its plan for new services industries, dubbed "New China," to bolster gross domestic product (GDP). Aside from semiconductors, "New China" sectors also include chip materials, robotics, aviation equipment and satellites.
Officials intend to achieve that goal by increasing the share of spending on research and development (R&D) to 2.5 percent of GDP for the 2016-2020 period, from 2.1 percent in 2011-2015, according to the new Five-Year Plan.
"China's announcement has of course not remained unnoticed, especially by large players in high-tech industries," economists at investment bank Natixis remarked in a report on Tuesday.
"Its aggressive push is worrying for [South] Korea's industrial giants. If we consider that Korea's major global comparative advantage is high-tech electronics, such threat becomes a systemic threat for the country's economic future."

South Korea's semiconductor industry is certainly paying attention. A day after the new Five-Year Plan was announced, Korea's Semiconductor Industry Association (KSIA) urged President Park Geun-Hye's government to counter the new market threat.
"I thought that China had attempted to invest only in the semiconductor industry but it seems that China has gone a step further," KSIA Chairman Park Sung-wook was quoted as saying on Sunday, referring to Beijing's aspirations to become a major semiconductor maker.
Leading Korean producers such as Samsung and SK Hynix should be worried, Natixis argues, citing three key factors.

Heavy consumption

China is already the largest consumer of semiconductors globally, which should support its domestic producers, Natixis explained.
"This is particularly relevant for Korean firms since they serve the Chinese market in quite a massive way."
After Intel, Samsung and SK Hynix are the biggest semiconductor suppliers in the Chinese market.
The mainland is South Korea’s largest trading partner, and the exchange of goods between the two nations is set to ramp up in the wake of last year’s Korea-China Free Trade Agreement.

A ‘bottom-down’ model

Beijing has also unveiled new steps that demonstrate its commitment to becoming a semiconductor superpower.
China has strived to become a global player for a decade now but it hasn't achieved success thus far due to its insistence on a state-led centralized approach to industrial development, Natixis said. Now, officials are embracing a more market-oriented method that encourages competition and allows companies to tap public funds to buy expertise abroad.
For example, China created the National Integrated Circuit Industry Equity Investment Fund in 2014, endowing it with $18.4 billion. Moreover, the Ministry of Industry and Information Technology intends to spend $153 billion over the next decade to support the semiconductor sector-the bulk of which will be spent on buying expertise from foreign competitors, according to Natixis.
"This obviously increases China's competitive threat [to Korea] in as far as they are able to execute appropriate merger & acquisition (M&A) deals in this sector."
Chinese investors have already started snapping up semiconductor assets. Last year, a consortium of mainland private equity firms snapped up U.S. firm Omnivisions Technologies for $1.9 billion in cash while a separate group of Chinese investors bought Nasdaq-listed Integrated Silicon Solution for $640 million.


Wednesday, March 9, 2016

China's tech ambitions threaten South Korea

Beijing unveils its new economic plan with a focus on taking over the semiconductor industry.
China's push to become a world leader in high-tech industries has one neighbor particularly worried about new competition on the block: South Korea.
In the mainland's new economic blueprint unveiled on Saturday, known as the Five-Year Plan, Chinese Communist Party officials identified semiconductors as a potential tech sector to dominate. That has raised an alarm in South Korea's semiconductor industry, the world's largest after the U.S. with an 18 percent global market share.
At present, China commands just 3 percent of the global semiconductor market share but Beijing is hoping to increase that figure as part of its plan for new services industries, dubbed "New China," to bolster gross domestic product (GDP). Aside from semiconductors, "New China" sectors also include chip materials, robotics, aviation equipment and satellites.
Officials intend to achieve that goal by increasing the share of spending on research and development (R&D) to 2.5 percent of GDP for the 2016-2020 period, from 2.1 percent in 2011-2015, according to the new Five-Year Plan.
"China's announcement has of course not remained unnoticed, especially by large players in high-tech industries," economists at investment bank Natixis remarked in a report on Tuesday.
"Its aggressive push is worrying for [South] Korea's industrial giants. If we consider that Korea's major global comparative advantage is high-tech electronics, such threat becomes a systemic threat for the country's economic future."

South Korea's semiconductor industry is certainly paying attention. A day after the new Five-Year Plan was announced, Korea's Semiconductor Industry Association (KSIA) urged President Park Geun-Hye's government to counter the new market threat.
"I thought that China had attempted to invest only in the semiconductor industry but it seems that China has gone a step further," KSIA Chairman Park Sung-wook was quoted as saying on Sunday, referring to Beijing's aspirations to become a major semiconductor maker.
Leading Korean producers such as Samsung and SK Hynix should be worried, Natixis argues, citing three key factors.

Heavy consumption

China is already the largest consumer of semiconductors globally, which should support its domestic producers, Natixis explained.
"This is particularly relevant for Korean firms since they serve the Chinese market in quite a massive way."
After Intel, Samsung and SK Hynix are the biggest semiconductor suppliers in the Chinese market.
The mainland is South Korea's largest trading partner, and the exchange of goods between the two nations is set to ramp up in the wake of last year's Korea-China Free Trade Agreement.

A ‘bottom-up’ model

Beijing has also unveiled new steps that demonstrate its commitment to becoming a semiconductor superpower.
China has strived to become a global player for a decade now but it hasn't achieved success thus far due to its insistence on a state-led centralized approach to industrial development, Natixis said. Now, officials are embracing a more market-oriented method that encourages competition and allows companies to tap public funds to buy expertise abroad.
For example, China created the National Integrated Circuit Industry Equity Investment Fund in 2014, endowing it with $18.4 billion. Moreover, the Ministry of Industry and Information Technology intends to spend $153 billion over the next decade to support the semiconductor sector, with the bulk going toward buying expertise from foreign competitors, according to Natixis.
"This obviously increases China's competitive threat [to Korea] in as far as they are able to execute appropriate merger & acquisition (M&A) deals in this sector."
Chinese investors have already started snapping up semiconductor assets. Last year, a consortium of mainland private equity firms acquired U.S. firm OmniVision Technologies for $1.9 billion in cash, while a separate group of Chinese investors bought Nasdaq-listed Integrated Silicon Solution for $640 million.

Shift to mobile

Lastly, Korean semiconductor manufacturers tend to focus more on computers rather than mobile handsets, demand for which is growing at a faster clip. Because China dominates mobile demand, it is ideally placed to profit from semiconductor growth.
Samsung Electronics and SK Hynix are the world leaders in DRAM chips, which are key for personal computers; as demand for those chips declines, semiconductor profit growth at both firms has slowed in recent quarters, Natixis said.
"Samsung and other Korean firms will need to push to achieve competitiveness in a higher tech level due to the changing nature of demand for chips as well as China's push for technology gains."
Asia's second-biggest player, Taiwan, isn't as exposed as South Korea, since it holds only about 7 percent of the global market share, the French bank noted.

http://www.cnbc.com/2016/03/09/south-korea-tech-at-risk-as-china-steps-up-ambitions.html

Wednesday, March 9, 2016

Samsung launches dual pixel image sensor for mobile

Samsung is producing a 12-megapixel image sensor for smartphones that applies dual pixel technology, the company announced.
Produced by the firm's logic chip and contract manufacturing division, System LSI, the sensor is entering mass production to coincide with the global rollout of the Galaxy S7 and S7 Edge series -- the first phones to have it installed.
Dual pixel technology -- which speeds up autofocus by using 100 percent of the sensor's pixels, compared with about 5 percent on conventional smartphone sensors -- has long been used in DSLR cameras and shines in low-light situations.

Pixel size has been boosted to 1.4 micrometers, and each of the 12 million pixels features two photodiodes that collect light. The diodes compare the images they capture -- much as a pair of human eyes does -- to control focus, an arrangement enabled by the firm's ISOCELL technology.
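The focusing principle can be illustrated with a toy one-dimensional model: the two photodiodes under each pixel see the scene through opposite halves of the lens aperture, so when the subject is out of focus the "left" and "right" images shift apart, and the shift that best aligns them tells the lens how far to move. A minimal sketch, with invented signals and a standard sum-of-absolute-differences match (not Samsung's actual algorithm):

```python
import numpy as np

def pdaf_disparity(left, right, max_shift=8):
    """Estimate the pixel shift between the left- and right-photodiode
    signals by minimizing the sum of absolute differences (SAD)."""
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        a = left[max(0, s):len(left) + min(0, s)]
        b = right[max(0, -s):len(right) + min(0, -s)]
        cost = np.abs(a - b).sum() / len(a)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

# Synthetic 1-D scene: the right-photodiode image sees the same bright
# bar displaced by 3 pixels, as happens when the subject is out of focus.
scene = np.zeros(64)
scene[20:40] = 1.0
left = scene
right = np.roll(scene, 3)
print(pdaf_disparity(left, right))  # prints -3
```

The sign and magnitude of the disparity map (through lens calibration) to a focus-motor move, which is why dual pixel focusing needs only one measurement instead of the hunt-and-check of contrast autofocus.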
The sensor is made with a 65 nanometer process, while the logic chip -- which converts the light read by the sensor into digital signals -- uses a 28 nanometer process to keep the part small enough for smartphones, the company said.
"We've applied dual pixel technology previously used in the niche professional camera market to mobile, and the sensor will become the best solution to take clear photos in low light," said Ben K Hur, Samsung's senior vice president of marketing at System LSI.
The company declined to comment on potential clients. Traditionally, Samsung begins supplying its latest semiconductor products to other vendors six months after its own flagship products.
Samsung is the runner-up in image sensors to Sony, which holds more than 50 percent of the market.

Its memory division is also raising chip specs, launching 256 GB UFS 2.0 storage based on 3D V-NAND that will likely make it into Samsung products launched later this year.
Samsung and LG have been highlighting the different camera features of their respective flagship phones ahead of release. The LG G5 uses a wide-angle dual camera for its 16-megapixel rear shooter, and a camera module can be attached to simulate a DSLR experience.

http://www.zdnet.com/article/samsung-launches-dual-pixel-image-sensor-for-mobile/

Tuesday, March 8, 2016

Trouble brews for chip makers as neon shortage looms

A looming shortage of neon gas threatens to create problems for manufacturers of semiconductors and the devices they power beginning in 2017.
Producers of the latest computer and cell phone chips use a laser-enabled photolithography technique to create transistors and other device features. Deep ultraviolet lasers, which contain neon gas as a buffer, have made it possible to pack an increasing number of transistors on chips that now boast features as small as 14 nm wide.
Semiconductor makers had hoped to transition by this year to extreme ultraviolet lasers, which enable even smaller features but don’t require the noble gas. But delays in that technology mean the industry will continue to rely on neon-consuming lasers, “pushing up demand for neon beyond what the supply chain can support” by 2017, says Lita Shon-Roy, CEO of Techcet, a consulting firm that issued a report on the problem.
Chip makers, which account for more than 90% of global neon consumption, are already experiencing high prices and some shortages stemming from the Russian conflict with Ukraine, Shon-Roy says. The war, which started in 2014, interrupted global supplies of the gas, about 70% of which comes from Iceblick, a firm based in the Ukrainian city of Odessa.
Iceblick gathers and purifies neon from large cryogenic air separation units that supply oxygen and nitrogen to steelmakers. Most of the air separation units equipped to capture neon, which makes up only 18.2 ppm of the atmosphere by volume, are in Eastern Europe.
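The 18.2 ppm figure makes the scale of the capture problem easy to estimate with back-of-the-envelope arithmetic (the only input from the article is the concentration; the rest are standard gas constants):

```python
# How much air must a cryogenic air-separation unit process
# to yield 1 kg of neon?
NEON_PPM = 18.2            # parts per million of air, by volume
MOLAR_VOLUME_L = 22.414    # liters per mole of ideal gas at STP
NEON_MOLAR_MASS = 20.18    # grams per mole

moles_needed = 1000.0 / NEON_MOLAR_MASS           # moles of Ne in 1 kg
neon_volume_l = moles_needed * MOLAR_VOLUME_L     # liters of pure neon
air_volume_l = neon_volume_l / (NEON_PPM * 1e-6)  # liters of air required

print(f"~{air_volume_l / 1e6:.0f} thousand cubic meters of air per kg of neon")
```

Roughly 61,000 cubic meters of air per kilogram of neon, which is why neon capture is economical only as a byproduct of the enormous air-separation plants serving steelmakers.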
James Greer, president of PVD Products, a provider of pulsed laser deposition systems for academic research, says he expects the shortage to get worse. Greer’s customers are among the smaller users who also depend on neon.
The cost of a cylinder containing a mix of neon and other gases used in such systems has increased in the past two years from $1,200 to as much as $12,000, Greer says. Wait times for delivery have gone from four weeks to eight months.
Others who are likely to feel the effect of a neon shortage are ophthalmologists, who employ lasers for LASIK vision correction surgery; makers of superconducting wire; and manufacturers of neon lighting.

http://cen.acs.org/articles/94/i10/Trouble-brews-chip-makers-neon.html

Monday, March 7, 2016

7nm Lithography Choices

Chipmakers are ramping up their 16nm/14nm logic processes, with 10nm expected to move into early production later this year. Barring a major breakthrough in lithography, chipmakers are using today’s 193nm immersion and multiple patterning for both 16/14nm and 10nm.
Now, chipmakers are focusing on the lithography options for 7nm. For this, they hope to use a combination of two technologies at 7nm—extreme ultraviolet (EUV) lithography, and 193nm immersion with multi-patterning.
To be sure, the industry is begging for EUV, as it will simplify the patterning process at 7nm. But as it stands today, EUV is still not ready for high-volume manufacturing at 7nm, which is slated for 2018 to 2019.
EUV may happen at 7nm, but there is also evidence that the technology could slip and get pushed out to 5nm. EUV is making noticeable progress, although there are still issues with the power source, resists and mask infrastructure.
Commenting on the status of EUV for Intel, and perhaps the entire industry, Mark Phillips, a fellow and director of lithography hardware and solutions at Intel, said: “Introduction and production at this point is a question of when and not if. EUV lithography is highly desirable for the 7nm node, but we’ll only use it when it’s ready.”
With those factors in mind, foundries are moving in two directions. Right now, Intel and Samsung separately hope to insert EUV for select layers at 7nm, if the technology is ready. Both companies also plan to use immersion/multi-patterning at 7nm.
In contrast, TSMC appears to be going the multi-patterning route at 7nm. The company will “exercise” or develop EUV at 7nm, but it plans to insert EUV at 5nm. EUV may not be ready for TSMC’s 7nm rollout, although the company is keeping its options open.
Meanwhile, GlobalFoundries continues to weigh its 7nm lithographic options. It will likely insert immersion/multi-patterning first at 7nm.
In addition, chipmakers are also looking at other options for 7nm, including directed self-assembly (DSA) and multi-beam e-beam. Another technology, nanoimprint, is geared for NAND flash.
To be sure, it’s a confusing picture. To help the industry get ahead of the curve, Semiconductor Engineering has taken a look at several possible scenarios and the design implications at 7nm.
At 7nm, there are multiple scenarios. Each chipmaker may follow a different path. But in general, the industry is looking at four main patterning scenarios at 7nm:
1. A chipmaker doesn’t insert EUV at 7nm, but rather it uses immersion/multi-patterning exclusively.
2. A chipmaker uses immersion/multi-patterning first. Then, EUV is inserted later in the flow where it makes sense.
3. A chipmaker inserts immersion/multi-patterning and EUV simultaneously.
4. A chipmaker uses an alternative technique, such as DSA and multi-beam.
Winners and losers
It’s difficult to predict which scenario will prevail based on past events. Years ago, for example, the industry predicted that 193nm wavelength lithography would hit the wall at 45nm. Then, the industry would insert a next-generation lithography (NGL) technology, such as EUV, multi-beam or nanoimprint.
Clearly, that prediction was wrong. Today, NGL remains delayed and is still not ready, while 193nm immersion has defied physics and remains the workhorse technology in the fab.
But given the patterning challenges at 10nm and beyond, the industry is in dire need of a new solution.
For one thing, scaling today's 16nm/14nm finFET to 10nm and 7nm is difficult. In finFETs, there are four parts that require patterning: fin, gate, metal and via. Each part may require a different tool type or technique. And there are different options for each piece.
For that reason, lithographers will need a range of technologies in their tool boxes. So which lithographic technologies will be the ultimate winners and losers?
“Everyone wants to know which technology is going to win—multi-patterning, EUV or DSA,” said David Fried, chief technology officer at Coventor, a supplier of predictive modeling tools. “It’s been my view that all three of them are going to win. They may all live in the same technology and flow in the foundry.”
There may even be a place for multi-beam. The decision to go with one option or another depends on several factors, such as manufacturability, pattern fidelity, throughput and yield, Fried said. “Everything gets back to cost.”
Scenario #1—No EUV
In any case, what are the patterning scenarios at 7nm? The first scenario is that chipmakers will not insert EUV at 7nm. Instead, they will exclusively use 193nm immersion/multi-patterning.
In this scenario, EUV may not be ready in time for a given chipmaker’s 7nm rollout. Or, EUV is ready or nearly there, but chipmakers are unwilling to take a risk until the technology is mature.
There are timing issues as well. “The end of 2017 is when I think the foundry 7nm risk production will start to ramp,” said Greg McIntyre, department director for advanced patterning at IMEC.
“In order to make that ramp date, you have to lock in your process assumptions roughly two years in advance. And then the design kits have to be ready a year in advance, which means (foundries) would have had to lock in their process assumptions a couple of months ago,” McIntyre said. “Although there has been great progress in EUV, it is a bit risky to lock in EUV as a process assumption in the past few months for two years out.”
This is not to say the industry wants multi-patterning over EUV. For example, with immersion/multi-patterning, there are 34 lithography steps at 7nm, according to ASML. With EUV alone, there are just 9 steps, according to ASML.
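The economics behind those step counts can be sketched with a toy model: multiply each flow's litho-step count by an assumed cost per exposure, since an EUV exposure is individually more expensive than a 193i one. The step counts are ASML's figures quoted above; the per-exposure cost ratios are purely illustrative assumptions, not vendor data:

```python
# Toy litho-cost comparison for the critical 7nm layers.
IMMERSION_STEPS = 34   # litho steps with 193i multi-patterning (ASML)
EUV_STEPS = 9          # litho steps with EUV alone (ASML)

for euv_cost_ratio in (2.0, 3.0, 4.0):       # assumed cost of 1 EUV vs 1 193i exposure
    immersion_cost = IMMERSION_STEPS * 1.0   # normalize a 193i exposure to cost 1
    euv_cost = EUV_STEPS * euv_cost_ratio
    print(f"EUV exposure {euv_cost_ratio:.0f}x pricier: "
          f"multi-patterning flow costs {immersion_cost / euv_cost:.2f}x the EUV flow")
```

Under these toy numbers, EUV stays cheaper until a single EUV exposure costs roughly 34/9, or about 3.8 times, a 193i exposure; the real break-even also hinges on throughput, availability and mask costs, which is exactly the calculation chipmakers describe below.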
Indeed, EUV offers several advantages. The problem? EUV isn’t ready for mass production at 7nm, as there are still gaps with the technology, at least right now.
On the other hand, optical lithography and multi-patterning are ready. In fact, ASML and Nikon are already shipping 193nm immersion scanners designed for high-volume 7nm production.
But as before, 193nm wavelength lithography reaches its physical limit at 40nm half-pitch. To extend optical lithography, chipmakers must deploy a multi-patterning scheme in the fab.
Generally, though, multi-patterning involves more process steps in the fab, which, in turn, equates to complexity, longer cycle times and higher cost.
One multi-patterning scheme is called double patterning, sometimes referred to as litho-etch-litho-etch (LELE). LELE requires two separate lithography and etch steps to define a single layer and provides a 30% reduction in pitch. 7nm may require triple patterning, or LELELE.
The other main schemes are self-aligned double patterning (SADP) and self-aligned quadruple patterning (SAQP). These processes use one lithography step and additional deposition and etch steps to define a spacer-like feature.
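The resolution arithmetic behind these schemes is simple: single-exposure 193nm immersion bottoms out near an 80nm pitch (the 40nm half-pitch limit cited above), LELE trims pitch by the quoted ~30%, spacer-based SADP halves it, and SAQP quarters it. A rough sketch (real flows vary by layer and foundry):

```python
SINGLE_EXPOSURE_PITCH_NM = 80.0  # ~40nm half-pitch limit of 193i

# Approximate minimum-pitch factor for each scheme relative to a
# single 193i exposure.
schemes = {
    "single exposure": 1.0,
    "LELE (double patterning)": 0.7,  # ~30% pitch reduction, per the article
    "SADP": 0.5,                      # one spacer pass halves the pitch
    "SAQP": 0.25,                     # two spacer passes quarter it
}

for name, factor in schemes.items():
    print(f"{name:26s} -> ~{SINGLE_EXPOSURE_PITCH_NM * factor:.0f} nm pitch")
```

The extra resolution is not free: each spacer pass adds deposition and etch steps, which is the cycle-time and cost penalty discussed below.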
Each foundry tends to use different schemes at various layers. SADP/SAQP are sometimes used to pattern finFETs. LELE is used for the critical metal layers.
“Some are doing LELE,” said Rich Wise, technical managing director at Lam Research. “Some are doing SADP and SAQP. Most are doing a mix of the two, depending on the level you are talking about.”
In the fab, the big challenge is to execute a multi-patterning scheme with precision. In SAQP, for example, the spacer-based structure has three separate critical dimensions (CDs). “They all must be identical,” said Rick Gottscho, executive vice president of global products at Lam Research.
If they don’t match, there is unwanted variability in a device. All told, the goal is to reduce or eliminate variation using various process control techniques. “It comes down to process control,” Wise said. “It comes down to how well you control your deposition and transfer etch.”
There are other issues as well. “It also presents some overlay challenges,” said Uday Mitra, vice president and head of strategy and marketing for the Etch Business Unit at Applied Materials. “You also have the edge-placement error problem.”
Overlay involves the ability of a scanner to align the various layers accurately on top of each other. If they aren’t aligned, it causes overlay errors. Meanwhile, edge-placement error is measured as the difference between the intended and printed contours in a layout. Unwanted overlay and edge-placement errors can impact chip performance and yield.
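A simple first-order way to budget edge-placement error is to combine the overlay error with half the CD (linewidth) error, added in quadrature when the two sources are independent. The sketch below uses that common approximation with hypothetical 7nm-class input numbers:

```python
import math

def epe_budget(overlay_nm, cd_error_nm):
    """First-order edge-placement-error estimate: overlay error and half
    the CD error, combined in quadrature as independent contributors."""
    return math.sqrt(overlay_nm**2 + (cd_error_nm / 2.0)**2)

# Hypothetical numbers: 2.5 nm overlay, 2.0 nm CD uniformity.
print(f"EPE ~ {epe_budget(2.5, 2.0):.2f} nm")
```

Even with nanometer-class overlay and CD control, the combined edge-placement error approaches the feature tolerances at 7nm, which is why multi-patterning puts so much pressure on process control and metrology.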
Multi-patterning impacts other steps in the flow. “The number of layers is rising,” said Mike Adel, senior director of strategic technology at KLA-Tencor. “From a metrology point of view, this has a very significant impact. This is driving a significant amount of metrology.”
In any case, what does this all mean for the IC design community if 7nm is done using multi-patterning and without EUV?
“In general, more advanced nodes are migrating to more regular (i.e. restricted, unidirectional, etc.) layout styles,” said David Abercrombie, advanced physical verification methodology program manager at Mentor Graphics. “This provides advantages in process margin, as well as helping to simplify the multi-patterning decomposition in some regards. Without EUV, the requirements for more complex styles of multi-patterning like TP, QP and SADP will at a minimum require designers to deal with new types of errors related to these methods. For example, TP and QP errors are not simply odd versus even cycles. So the design teams need to go through a new learning curve versus what they were doing in earlier nodes. Decomposition won’t be a nightmare, but the cause and effect relationship between layout and error becomes much more abstract.”
Abercrombie noted that this will drive two areas of innovation. “First, on the EDA side, the tools need to find creative ways to present errors and assist with debug. Second, design teams will need to innovate their own restrictive design methodologies that better guarantee correct by construction layouts,” he said.
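Abercrombie's remark about "odd versus even cycles" refers to how decomposition is checked. For two-mask LELE, features that sit closer than the single-exposure limit become connected nodes in a conflict graph, and the layout is decomposable exactly when that graph contains no odd cycle, i.e. is 2-colorable; with triple or quadruple patterning the failure conditions are no longer that simple. A minimal sketch of the two-mask check, with an invented conflict graph:

```python
from collections import deque

def lele_decomposable(n, conflicts):
    """Return True if features 0..n-1 can be split across two masks,
    i.e. the conflict graph is bipartite (contains no odd cycle)."""
    adj = [[] for _ in range(n)]
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    color = [None] * n
    for start in range(n):          # BFS 2-coloring, one component at a time
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False    # odd cycle: needs a third mask or a redesign
    return True

print(lele_decomposable(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # even cycle: True
print(lele_decomposable(3, [(0, 1), (1, 2), (2, 0)]))          # odd cycle: False
```

This is why error reporting gets harder beyond two masks: a three- or four-coloring failure has no single "odd cycle" to point at, so EDA tools must find other ways to explain to designers which polygons to move.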
Scenario #2—EUV plus multi-patterning
Another scenario is that chipmakers initially will insert immersion/multi-patterning at 7nm. Then, when EUV is ready, the technology is inserted in select layers down the road.
This scenario is the most desirable for chipmakers. “EUV has been delayed for a long time. During that time, 193nm immersion has been the workhorse for the semiconductor industry,” said Seong-Sue Kim, a technical staff member within the Semiconductor R&D Center at Samsung. “But in the case of 7nm, the situation is different. Of course, 193nm immersion has (advanced) technologically, but the problem is cost. The situation is we need EUV.”
There are technical issues as well. “I can make nice lines and spaces (with optical),” said Harry Levinson, senior fellow and senior director of technology research at GlobalFoundries. “But how many cuts do I need and where is the placement for those? And to make contact holes for these dimensions is a lot more challenging. That’s where the stress is going to be if we want to do it optically.”
Needless to say, chipmakers want EUV, but the insertion depends on the readiness of the technology. Today, ASML is shipping its latest version of its EUV scanner—the NXE:3350B. The 13.5nm tool has a numerical aperture of 0.33 and a 22nm resolution half-pitch.
By year’s end, ASML hopes to ship another version—the NXE:3400B. The new version has an upgraded pupil design for higher resolution.
In the field, ASML’s EUV tools are equipped with an 80-watt source, enabling a throughput of 75 wafers an hour. Tool availability is roughly 70% to 80%, which is below the industry’s target levels.
In 2016, ASML plans to ship a 125-watt source. But as before, chipmakers want a 250-watt source before they put EUV into production. ASML plans to demonstrate a 250-watt source this year or next.
“There is a pretty good chance that 125 watts will happen this year,” Imec’s McIntyre said. “Sometime through next year, we should hopefully see the ramp up to 250 watts. So it’s headed in the right direction. Because of that, there has been a lot more movement in materials development, pellicles and mask defectivity improvement.”
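To first order, EUV throughput scales with source power, since for a fixed resist dose the exposure time falls as the source gets brighter while wafer handling and alignment time does not. A toy scaling anchored to the 80-watt, 75-wafers-per-hour figure above (the 30% overhead fraction is a hypothetical illustration, not an ASML spec):

```python
def wafers_per_hour(source_watts, ref_watts=80.0, ref_wph=75.0,
                    overhead_frac=0.3):
    """Naive throughput model: exposure time scales inversely with source
    power; a fixed fraction of per-wafer time (handling, alignment) does not.
    overhead_frac is an assumed illustration value."""
    ref_time = 3600.0 / ref_wph              # seconds per wafer at ref_watts
    expose = ref_time * (1 - overhead_frac)  # power-dependent portion
    overhead = ref_time * overhead_frac      # power-independent portion
    return 3600.0 / (expose * ref_watts / source_watts + overhead)

for watts in (80, 125, 250):
    print(f"{watts:4d} W -> ~{wafers_per_hour(watts):.0f} wafers/hour")
```

Under these assumptions, 250 watts yields roughly double the wafers per hour of 80 watts rather than the full 3x, because the fixed overhead increasingly dominates; real scanner throughput also depends on resist dose, field count and tool availability.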
Still, the questions are clear: Will EUV be ready on time for 7nm? And when does it make economic sense to use it? “We must use EUV carefully,” Intel’s Phillips said. “We need to replace at least three 193nm masks, plus other process steps in the flow for multiple patterning, in order for it to be cost effective.
“In short, we can’t use (EUV) everywhere,” Phillips said. “The implications are that we will continue to use 193nm immersion everywhere possible in order to keep wafer costs in control.”
So, assuming EUV is ready, then what? At 7nm, chipmakers will implement some form of complementary lithography in the fab. In this technique, the first step is to make lines or gratings using 193nm immersion.
Then, the hard part is to cut the lines into exact patterns. For this, chipmakers hope to use EUV to make the cuts as well as the vias.
But still, chipmakers will require both EUV with multi-patterning at 7nm, a complex process at best. “By the time we get EUV inserted, it might require EUV with SADP,” Coventor’s Fried said. “It might also require SADP with DSA healing. It might be DSA in one layer and EUV in another layer.”
So, in any case, what are the design implications? “It is still not clear, however, exactly what design restrictions will be needed to make EUV work well,” Mentor’s Abercrombie said. “It may turn out that an EUV layer needs more restricted layout constraints than the same layer with advanced multi-patterning.”
Scenario #3—EUV is on time
The third scenario is perhaps the most remote possibility. EUV will arrive on time, and is inserted, for the early stages at 7nm.
“If EUV intersects the early 7nm timeline, which is very unlikely given the early design work beginning on 7nm, it would probably only be used on one or two layers that otherwise would require four masks,” Abercrombie said. “The risk this early in the EUV deployment lifetime is that if there are unexpected up time or quality issues, you could have significant process down time and delays in production until those issues are resolved. You might even see parallel flows on those layers, so that there is a multi-patterning back-up to the EUV layers ready to go.”
Scenario #4—Alternative approaches
Another option is e-beam or direct-write lithography. Direct-write uses an e-beam tool to pattern images directly on a wafer. It is attractive because it does not require an expensive photomask.
But the throughputs for today’s single-beam e-beam tools are too slow. So for years, the industry has been working on multi-beam e-beam technology to speed up the throughputs.
One company, Multibeam, is developing a multi-beam e-beam technology called Complementary E-Beam Lithography (CEBL). CEBL is designed to handle a select portion of the patterning process—line cuts.
“We are not an NGL, but rather we are a complementary technology,” said David Lam, chairman of Multibeam. “We can take full advantage of 1D layouts. We focus on the cuts.”

http://semiengineering.com/7nm-lithography-choices/