A New Programming Language for High-Performance Computers

A new tensor language developed at MIT, with formally verified optimizations, could provide benefits for high-performance computing.

High-performance computing is needed when you have to work through huge amounts of data at a reasonable speed.

Otherwise, operations such as image processing or various deep learning applications can take a ridiculous amount of time.

It is widely believed that there is an inevitable trade-off between speed and reliability when performing such operations.

According to this view, if speed is the top priority, reliability will likely suffer, and vice versa.

However, a research team based mainly at MIT has been questioning this concept, claiming that you can actually have it all.


"Speed ​​and accuracy don't have to compete," said Amanda Liu, a sophomore doctoral student at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), with the new programming language they wrote specifically for high-performance computing. Instead, they can go hand in hand together in the programs we write,” she says.

Together with Gilbert Louis Bernstein of the University of California at Berkeley and MIT's Adam Chlipala and Jonathan Ragan-Kelley, Liu introduced "A Tensor Language" (ATL), the language they developed.

Tensors, which give the language its name, are generalizations of vectors and matrices.

Whereas vectors are one-dimensional objects (often represented by arrows) and matrices are the familiar two-dimensional arrays of numbers, tensors are n-dimensional arrays, which can take the form of, say, a 3x3x3 array or something of even higher dimension.
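To make the shape distinction concrete, here is a small illustration using Python and NumPy; it is purely illustrative and stands in for the objects described above, not for ATL itself.

```python
# Illustrative only: NumPy arrays standing in for vectors, matrices, and tensors.
import numpy as np

vector = np.arange(3)                     # one-dimensional, shape (3,)
matrix = np.arange(9).reshape(3, 3)       # two-dimensional, shape (3, 3)
tensor = np.arange(27).reshape(3, 3, 3)   # three-dimensional, shape (3, 3, 3)

print(vector.ndim, matrix.ndim, tensor.ndim)  # prints: 1 2 3
```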

The whole purpose of a computer algorithm or program is to carry out a particular computation.

But there can be many different ways to write the same program, and some of them are much faster than others.

The basic logic behind ATL, she explains, is this:

"Given that high-performance computing is so resource-intensive, you want to be able to modify, or rewrite, programs into an optimal form in order to speed things up. One usually starts with the program that is easiest to write, but that may not be the fastest way to run it, so further adjustments are still needed."

As an example, suppose an image is represented by a 100×100 array of numbers, each corresponding to a pixel, and you want to compute the average value of those numbers.

One way to do this is a two-step calculation: first determine the average of each row, and then take the average of the resulting column of row averages.

ATL comes with an associated toolkit, what computer scientists call a "framework," that can show how this two-step process can be converted into a faster one-step process.
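A rough sketch of that rewrite, written in plain Python with NumPy rather than in ATL, might look like the following; the variable names are ours, chosen only to mirror the example above.

```python
# A minimal sketch of the rewrite described above, in NumPy rather than ATL.
import numpy as np

image = np.random.rand(100, 100)   # stand-in for a 100x100 image of pixel values

# Two-step version: average each row, then average the resulting row averages.
row_means = image.mean(axis=1)     # 100 per-row averages
two_step = row_means.mean()

# One-step version: average all 10,000 pixels in a single pass.
one_step = image.mean()

# Both formulations yield the same overall average.
assert np.isclose(two_step, one_step)
```

Because every row has the same length, averaging the row averages gives the same result as averaging all the pixels at once; the one-step form simply does the work in a single pass over the data.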

"Using something called a proof assistant, we can guarantee that this optimization is correct," Liu says.
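To see informally what such a guarantee means, it helps to contrast it with ordinary testing. The Python check below is our own illustration, not part of ATL or Coq: random testing only spot-checks the row-then-column rewrite on sample inputs, whereas a proof assistant establishes the equivalence for every possible input.

```python
# Informal analogue only: random testing spot-checks the equivalence,
# while a proof assistant proves it for all inputs.
import numpy as np

def two_step_mean(image):
    # Average each row, then average the row averages.
    return image.mean(axis=1).mean()

def one_step_mean(image):
    # Average every pixel in a single pass.
    return image.mean()

rng = np.random.default_rng(0)
for _ in range(1000):
    img = rng.random((100, 100))
    assert np.isclose(two_step_mean(img), one_step_mean(img))
```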

To that end, the team's new language builds on Coq, an existing language that includes a proof assistant.

The proof assistant, in turn, has the inherent capacity to prove its claims with mathematical certainty.

Coq has another feature that makes it attractive to the MIT-based group: programs written in it, or in adaptations of it, always terminate and cannot run forever on infinite loops (as programs written in Java, for example, can).

"We run a program to get a single answer, a number or a tensor," Liu continues. "A program that never terminates would be useless to us, but termination is something we get for free by using Coq."

The ATL project brings together the two main research interests of Ragan-Kelley and Chlipala.

Ragan-Kelley's work centers on high-performance computing and the optimization of algorithms, while Chlipala is more of a tool-builder, focused on the formal (that is, mathematically based) verification of algorithmic optimizations.

This represents their first collaboration. Bernstein and Liu were brought in last year, and the result was the ATL program.

It now stands as the first, and so far the only, tensor language with formally verified optimizations.

But Liu cautions that ATL is still just a prototype, albeit a promising one, that has been tested on a number of small programs.

"Looking forward, one of our main goals is to improve the scalability of ATL so it can be used for the larger programs we see in the real world," she says.

In the past, optimizations of these programs were typically done by hand, on a much more ad hoc basis, often involving trial and error, and sometimes a good deal of error.

With ATL, Liu adds, “people will be able to take a much more principled approach to rewriting these programs – and do it more easily and with greater assurance of accuracy.”

Source: MIT News
