What are weights in AI?

Machine-learning models called artificial neural networks (ANNs) have taken the world by storm, transforming everything from the revival of endangered languages to the pace of drug discovery.

Every ANN has three components: nodes, edges, and weights. Scientists originally designed ANNs to mimic the learning behaviour of the human brain. The nodes behave like neurons: simple computers that accept an input signal, manipulate it in some way, and deliver an output signal. The connections between the nodes are called edges and they mimic synapses.
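For illustration, a node can be sketched as a small function that accepts input signals, combines them, and emits an output signal. The sigmoid used here to squash the result is just one common choice of activation, not something specific to any particular network:

```python
import math

def node(inputs):
    """A node: accept input signals, manipulate them, deliver an output signal."""
    total = sum(inputs)                 # combine the incoming signals
    return 1 / (1 + math.exp(-total))  # squash the result into (0, 1) with a sigmoid

print(node([0.5, -1.2, 2.0]))  # prints roughly 0.785
```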

But unlike nodes and edges, weights are purely mathematical. A weight denotes the strength of an edge: the higher the weight, the stronger the signals transmitted along that edge, and the more attention the destination node pays to them. When an ANN ‘learns’ new information, it essentially adjusts the weights of its edges so that its output moves closer to the desired result.
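As a toy illustration of that idea (all the numbers below are made up), a single weight on a single edge can be nudged, step by step, toward the value that makes the node’s output match a target. Real networks do the same thing across millions or billions of weights:

```python
# One edge with weight w connecting an input signal x to a node.
w = 0.1        # initial weight (arbitrary starting guess)
x = 1.0        # input signal
target = 0.8   # the output we want the node to learn to produce
lr = 0.5       # learning rate (hypothetical)

for step in range(20):
    y = w * x              # a stronger weight transmits a stronger signal
    error = y - target     # how far the output is from the desired result
    w -= lr * error * x    # 'learning': adjust the weight to shrink the error

print(round(w, 3))  # converges toward 0.8, the weight that produces the target
```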

Recently, the Open Source Initiative (OSI) released a controversial definition of “open source AI”. The open-source paradigm is built on transparency, yet the OSI’s definition allows the data on which an ANN trains to remain hidden. This is a concern in many contexts, but in some, like medical AI, hiding the data is essential.

To bridge this gap, security researcher Bruce Schneier has proposed that the OSI’s definition be renamed “open source weights” instead, whereby an ANN’s weights are open source but not its training data. The implication is that an ANN with “open source weights” would still reveal how it processes input data without revealing the data itself.

This debate illustrates the important role weights play in AI.

Published - November 17, 2024 10:23 am IST