Neural Network

You don't have to possess a profound understanding of how neural networks work to use Stonito Lotto effectively.

Understanding the basic configuration options will suffice.

Neural Network
{1} Input values count

This value represents the number of inputs to the neural network. It is calculated from the network settings and the types of input data the network will use.

{2} Hidden layers

Every neural network has at least two layers: one is the input layer, and the second is the output layer. They are implied and not shown here. Hidden layers are the layers interconnecting the input and output layers. Each hidden layer is represented only by the number of nodes it comprises. Every number represents a hidden layer with that number of nodes, listed in order starting from the input layer. The more layers you add, the more complex the network becomes. A complex network needs more time to train but can capture more intricate relationships between input and output data.
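As a rough illustration (not the application's actual code), a hidden-layers entry such as "21 171 78", together with the implied input and output layers, defines a fully connected network whose size can be sketched like this:

```python
# Illustrative sketch only; Stonito Lotto's internals may differ.

def layer_sizes(n_inputs, hidden, n_outputs):
    """Full list of layer sizes; the input and output layers are implied."""
    return [n_inputs] + list(hidden) + [n_outputs]

def weight_count(sizes):
    """Each node connects to every node of the adjacent layer, so the
    weight count is the sum of products of neighboring layer sizes."""
    return sum(a * b for a, b in zip(sizes, sizes[1:]))

sizes = layer_sizes(21, [21, 171, 78], 39)
print(sizes)                # [21, 21, 171, 78, 39]
print(weight_count(sizes))  # 20412 trainable connection weights
```

More hidden layers or nodes increase the weight count, and with it the training time.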

{3} Maximum Epochs

This is related to the training of the network. When this number of epochs is reached, the training stops. One epoch is one full pass over the training set, similar to one generation.

{4} Minimal error

This is also related to training. When the minimal error is reached, the training stops. The error doesn't converge to zero because the prediction is not a deterministic problem.

{5} Use Date check

Uses the date of the drawing as an input.

{6} Use Previous check

Uses the previous drawing's numbers as inputs.

{7} Number of previous rounds used as input

Defines how many previous rounds are used as inputs for training and inference. For example, if the numbers pool is 39, a value of 3 means that during training the three previous rounds are used as inputs for every round, which makes 87 input values in total.
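As a hedged sketch, one plausible way (not necessarily the encoding Stonito Lotto actually uses, whose exact input count depends on its settings) to turn previous rounds into network inputs is to concatenate the drawn numbers, scaled to the 0-1 range:

```python
# Illustrative assumption: feed the raw drawn numbers of the previous
# rounds, scaled by the pool size, as network inputs.

def encode_previous_rounds(history, index, n_rounds, pool_max=39):
    """Concatenate the numbers of the n_rounds draws before `index`,
    scaled to 0..1 so they suit a neural network input."""
    inputs = []
    for round_numbers in history[index - n_rounds:index]:
        inputs.extend(n / pool_max for n in round_numbers)
    return inputs

history = [[3, 8, 14, 21, 27, 33, 39],
           [1, 5, 12, 19, 25, 31, 38],
           [2, 9, 16, 22, 28, 34, 37],
           [4, 7, 11, 20, 26, 30, 36]]
x = encode_previous_rounds(history, 3, 3)
print(len(x))  # 21 inputs for 3 previous rounds of 7 numbers each
```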

{8} Use Incidence check

Uses the count of times each number appears in previous draws as inputs.
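A hedged sketch of such incidence counts, i.e. how often each number appeared in the draws considered (the exact representation inside Stonito Lotto may differ):

```python
# Illustrative sketch: count how many times each pool number was drawn.

from collections import Counter

def incidence_counts(history, pool_size):
    """Return one count per number in the pool, in order 1..pool_size."""
    counts = Counter(n for draw in history for n in draw)
    return [counts.get(number, 0) for number in range(1, pool_size + 1)]

history = [[3, 8, 14], [3, 8, 21], [8, 14, 27]]
print(incidence_counts(history, 30)[:10])  # [0, 0, 2, 0, 0, 0, 0, 3, 0, 0]
```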

{9} Use Cross Presence check

Uses a table of the mutual presence of pairs of numbers across all previous draws.
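A hedged sketch of such a cross-presence table, i.e. for every pair of numbers, how often they were drawn together (the application's actual encoding may differ):

```python
# Illustrative sketch: symmetric co-occurrence table of number pairs.

from itertools import combinations

def cross_presence(history, pool_size):
    """table[a][b] = number of draws containing both a and b."""
    table = [[0] * (pool_size + 1) for _ in range(pool_size + 1)]
    for draw in history:
        for a, b in combinations(sorted(draw), 2):
            table[a][b] += 1
            table[b][a] += 1
    return table

history = [[3, 8, 14], [3, 8, 21]]
t = cross_presence(history, 39)
print(t[3][8])  # 2: numbers 3 and 8 appeared together in two draws
```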

{10} Use Maximum Last Rounds check

The history may be quite large, so including all the draws in training would impose a heavy processing burden. It makes sense to limit the training to a given number of the most recent drawings. It's up to the user to find the optimal number for a particular game.

{11} Only Use Rounds After check

Similar to the previous option, but instead of a count it sets the date after which draws are considered. This date does not otherwise limit the number of drawings included in training.

{12} Update existing check

If checked, no new network will be created; the existing one will be updated instead. Otherwise, a new network will be created and the current network will be saved.

{13} Name of the network

This text identifies the network in the list of trained networks for a particular system. It is saved only upon completion of the training process.

{14} OK button

Initiates the network training process, which may take some time. During the process, the current epoch and error values are updated after each finished epoch.

You are advised to use multiple networks with various settings and keep track of how well they perform in future games.

You can adjust them any time you want.

To add a new network to a particular game, just uncheck the Update existing checkbox.

After the training is completed, the newly created network will be selected as active.

You can delete the selected network from the main menu. Deleting a network is necessary only if you want to decrease the number of networks; otherwise, you can simply update the settings and name of the network and retrain it to replace the existing one.

In the main menu there is also an option to Train all networks, which retrains all the networks in succession. The last to be retrained is the network for patterns.

Neural Network for Pattern

Setting up and training this network differs very little from the main neural network described in the previous topic.

This network has fewer parameters and is much simpler to train and use.

Using this trained network, you will be able to check how good any given combination looks as a jackpot combination, based on previous draws.

Neural Network for Pattern
{1} Hidden Layers

The internal structure of the neural network, represented by the node counts of the layers between the input and output layers.

{2} Maximum Epochs

The training will finish when this number of epochs is reached.

{3} Minimal Error

The training will finish when the last error value is less than or equal to this value.

{4} Limit to Last Rounds Only

If checked, the entered value limits the history draws used for training to that many most recent rounds.

{5} Limit to a Date

If checked, the set of history draws is limited to draws coming after the selected date.

Network Performance

For an example system of Lutrija Srbije Loto 7/39, I trained the network using this setup.

There are three hidden layers, with 21, 171, and 78 nodes respectively. The input layer has 21 nodes (3 times 7) and the output layer has 39 nodes.
Every node in one layer is connected to all the nodes of the adjacent layers, and those connection weights are what is adjusted during training to produce the best possible results on the training set.
There is no way to tell which configuration of hidden layers is the best, so this is only one of the myriad of possibilities.
The network finished training after 33404 epochs (or steps), reaching a minimal error of 0.141..., which is a relatively high error value, meaning its performance even on the training set will not be perfect.
For the training, I used the last 700 rounds, which is approximately half of the whole history. The network stopped because one of the criteria for stopping was reached, namely the number of epochs without significant improvement.
Here are the results of the trained network prediction over the whole history.
TRAINED stands for the set of rounds included in the training. It is normal for the trained network to perform better on that set: during training, the network adjusts its internode connection weights to achieve the best possible results on the training set. If the network had broken the system, the results on this training set would all be 7.
EXCLUDED stands for the set of rounds that were not part of the training set. For this test, they behave like future rounds, unknown to the network. But they were nevertheless drawn in the past by the same system, so they are a good indication of how this network will perform in the future too.
Notice that even on the training set there are some rounds with zero winning numbers. However, there are also significant wins.

Training Settings

Stonito Lotto makes use of two different types of neural networks:
Training Settings
{1} Number prediction tab

This tab page is for setting up the networks used for number prediction. The results of those networks are the probabilities (0-1) for each number to appear in the next draw.

{2} Winning pattern tab

This tab page is for setting up the Winning Pattern Networks. They have the same settings as the Number Prediction Networks, except for the last setting (at the bottom). Their result is the similarity (0-1) of any combination to the previous winning combinations.

{3} Stop on epochs count

The training will stop regardless of minimal error if this number of epochs is reached.

{4} Stop on error not changing

The error changes with each epoch. The training will stop if the error, represented with the defined number of decimal places, doesn't change for this number of epochs.

{5} Stop on error

The training will stop if the minimal error is reached, regardless of epoch count.

{6} Decimal places for error

Defines how many decimal places are used to represent the error computed in every epoch.
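The three stopping criteria above can be sketched in a hedged way like this; parameter names are illustrative, not the application's actual settings:

```python
# Illustrative sketch of the stopping rules: epoch limit, error limit,
# and "rounded error unchanged for N epochs".

def should_stop(epoch, errors, max_epochs, min_error, stall_epochs, decimals):
    """errors is the per-epoch error history, most recent last."""
    if epoch >= max_epochs:          # stop on epochs count
        return True
    if errors and errors[-1] <= min_error:  # stop on error
        return True
    # Stop if the error, rounded to `decimals` places, has not changed
    # over the last `stall_epochs` epochs.
    rounded = [round(e, decimals) for e in errors[-(stall_epochs + 1):]]
    if len(rounded) > stall_epochs and len(set(rounded)) == 1:
        return True
    return False

print(should_stop(10, [0.5, 0.3], 100, 0.1, 5, 3))   # False: keep training
print(should_stop(10, [0.5, 0.09], 100, 0.1, 5, 3))  # True: error reached
```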

{7} Factor for not winning combinations

Used only for the winning pattern network. It defines how many non-winning combinations are included in the training set. For example, if you have 1000 combinations in the history draws, a factor of 2.0 means that the training set will include 2000 random non-winning combinations and 1000 winning ones. A factor of 2.5 would make for 2500 non-winning combinations.
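A hedged sketch of how such a factor could be applied when building the pattern network's training set; the function name and the sampling method are illustrative assumptions, not the application's actual code:

```python
# Illustrative sketch: history combinations labeled 1, plus
# factor * len(winning) random non-winning combinations labeled 0.

import random

def build_training_set(winning, factor, pool_size, draw_size, rng):
    winning_set = {tuple(sorted(c)) for c in winning}
    negatives = []
    while len(negatives) < int(len(winning) * factor):
        combo = tuple(sorted(rng.sample(range(1, pool_size + 1), draw_size)))
        if combo not in winning_set:  # only truly non-winning combinations
            negatives.append(combo)
    return [(c, 1) for c in winning_set] + [(c, 0) for c in negatives]

rng = random.Random(0)
data = build_training_set([[3, 8, 14, 21, 27, 33, 39]], 2.0, 39, 7, rng)
print(len(data))  # 3: one winning plus 2.0 * 1 non-winning combinations
```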