Idaho National Lab’s digital engineering team relies on algorithms and verifiable data

Artificial intelligence and machine learning have emerged as important tools for modernizing systems and simplifying federal processes. Still, those tools need the right data to train their algorithms.

Humans help train algorithms simply by using them, and over time many of those algorithms learn through what is known as a “deep neural network.”

Chris Ritter, head of the digital and software engineering group at Idaho National Laboratory, said the ultimate goal of artificial intelligence is to make a computer think like a human, or outperform humans at predictive tasks – programmed devices that can carry out analyses themselves. That span of general artificial intelligence, from something as simple as a Google CAPTCHA to operating a nuclear reactor, is what his office is investigating.

“Much of the research that exists is curating the data and getting it into a format that makes it possible to achieve these scaling benefits and apply machine learning to some of our complex problems in the energy space,” Ritter said on Federal Monthly Insights – Artificial Intelligence and Data.

Aside from deep neural networks, which are a kind of “black box” that is not easy to check, Ritter said another type of algorithm is called “explainable or transparent artificial intelligence”.

“That means it’s mathematical, right? So it is completely verifiable. And we can apply some regression techniques to penalize those areas, and you can make this into a newer technique,” he said on Federal Drive with Tom Temin. “And what a lot of people don’t think about is, when you have a lot of data – image recognition is a good example, isn’t it? – then DNNs, these deep neural networks, are a great approach. However, when you have less data, sometimes it is better to use a common statistical approach.”

In use cases like life-safety and safety-critical systems, it is important to be able to review what the algorithm is doing and why it is doing it.
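
As a rough illustration of that contrast, here is a minimal sketch, assuming Python with scikit-learn and purely synthetic data rather than anything from INL: an L1-penalized regression whose fitted coefficients can be read and checked directly, trained alongside a small neural network on the same limited dataset.

```python
# Minimal sketch (not INL's actual code): contrasting a transparent,
# penalized regression with a small neural network on a small dataset.
# Assumes scikit-learn and NumPy; the data here is entirely synthetic.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # small dataset, 10 candidate features
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Explainable" model: the L1 penalty shrinks irrelevant coefficients toward
# zero, so the fitted equation can be read and verified directly.
lasso = Lasso(alpha=0.1).fit(X_train, y_train)
print("lasso coefficients:", np.round(lasso.coef_, 2))
print("lasso R^2:", round(lasso.score(X_test, y_test), 3))

# "Black box" comparison: a small neural network on the same limited data.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print("neural net R^2:", round(mlp.score(X_test, y_test), 3))
```

In a run like this, the penalized model yields a handful of readable coefficients, while the network exposes only layers of weights with no comparable equation to inspect.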

At Idaho National Laboratory, Ritter is engaged in digital engineering, which draws on key principles including modeling, building from a single source of truth and innovation, to name a few. The group has sought to change the way people work so that they produce data in buckets engineers can readily mine. Rather than trying to make an algorithm smarter, Ritter said, the idea is to let people change their patterns a little.

In the area of innovation, he cited the Versatile Test Reactor project as an example. The reactor is being built to perform irradiation tests at higher neutron energy fluxes than are currently available and could therefore “help expedite the testing of advanced nuclear fuels, materials, instruments and sensors,” according to the Energy Department. Ritter said many university researchers involved in the project bring novel AI techniques to the table.

To ensure that the digital engineering of these large-scale projects in the lab delivers usable, real-world results, engineers create ontologies, or blueprints, to curate the data. Examples of that data include equipment lists, computerized design files, costs, schedule information, risks and data from plant operators, Ritter said. With these subsystems generating far more data than any person could review in an hour, predictive maintenance can spot anomalies and raise a red flag.

“Predictive maintenance is used in other applications and industries. So we know that this technique is quite possible on the design side – being able to use artificial intelligence in the design of an asset,” he said. “I think we are still in the early stages of this idea.”
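
As an illustration of the kind of red flag described above, here is a minimal sketch, again assuming Python with scikit-learn; the sensor names and values are hypothetical and not INL’s actual pipeline. An isolation forest is fit on data representing normal operation and then flags readings that drift away from it.

```python
# Minimal sketch (hypothetical data, not INL's pipeline): flagging anomalies
# in plant sensor readings, the kind of red flag predictive maintenance raises.
# Assumes scikit-learn; sensor names and values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Normal operating data: e.g. pump vibration (mm/s) and bearing temperature (C).
normal = np.column_stack([
    rng.normal(2.0, 0.2, 1000),    # vibration
    rng.normal(60.0, 3.0, 1000),   # temperature
])
# A handful of drifting readings an operator might miss in an hourly review.
drift = np.column_stack([
    rng.normal(3.5, 0.3, 10),
    rng.normal(75.0, 2.0, 10),
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(drift)       # -1 marks a suspected anomaly
print(f"{(flags == -1).sum()} of {len(drift)} drifting readings flagged")
```

The point is not the particular model but the workflow: train on data that represents normal operation, then let the detector surface readings that a person reviewing hourly snapshots might never see.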
