Nvidia’s business is increasingly the business of artificial
intelligence, and its latest partnership fits with that new role. The
graphics processor maker is supplying the Tokyo Institute of Technology
with the GPUs that will power its new AI supercomputer, which will be the fastest of its kind in Japan once completed.
Nvidia Tesla P100 GPUs, built on the company's Pascal architecture,
will be used to build the cluster, which will be known as
TSUBAME3.0 and which will replace TSUBAME2.5 with twice the performance.
Don't feel too bad for TSUBAME2.5, however: it's still
going to be in active use, adding its power to TSUBAME3.0's projected
47 petaflops for a combined total of 64.3 petaflops. You'd
need a heck of a lot of iPhones to match that (like very, very insanely
many).
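As a quick back-of-the-envelope check on those numbers, here is a minimal sketch in Python. The 47 and 64.3 petaflop figures come from the announcement; the TSUBAME2.5 contribution shown is simply the implied difference, not an officially quoted spec.

```python
# Sanity check on the performance figures quoted above.
tsubame3_pflops = 47.0    # projected performance of TSUBAME3.0 (petaflops)
combined_pflops = 64.3    # projected combined total with TSUBAME2.5 (petaflops)

# Implied contribution of the older TSUBAME2.5 system (assumption:
# the combined figure is just the sum of the two machines).
tsubame25_pflops = combined_pflops - tsubame3_pflops
print(f"TSUBAME2.5 contributes roughly {tsubame25_pflops:.1f} petaflops")  # ~17.3
```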
The goal is for TSUBAME3.0 to be up and processing by this summer,
when its prowess will be put to use in service of education and
high-tech research at the Tokyo academic institution. It'll also be
available for private-sector contracting, and the school says it can't
wait to start teaching the new virtual brain.
https://techcrunch.com/2017/02/17/tokyo-institute-of-technology-taps-nvidia-for-japans-fastest-ai-supercomputer/