Racku, let’s play with trains

AI-generated: an image of a futuristic AI system precisely controlling simulations of various real-world applications such as model railways, environmental management, and logistics networks. Trains, tracks, and other AI-controlled systems are clearly displayed, with attention to detail and technological accuracy. The AI interacts with the world through sensors, holographic displays, and robotic arms, a balanced mix of creativity and advanced machine learning.

This reflection stems from a simple prompt: what would my friends use a deep learning system for in their leisure time, and what would interest them in raising a colleague in the form of artificial intelligence? Because fun is the basis of human creativity, let’s play… and trains are up next…

A model railway enthusiast probably envisions their layout as a faithful representation simulating real traffic, with as many self-acting elements and details as possible: lighting, signals, switches, gadgets, perhaps even a combination of mainline and suburban trains… Check Google Images; some layouts are absolutely epic, well worth paying admission to see. Many layouts, however, live in private households, and not everyone gets to see them. We’ll work our way toward such a playing system step by step… and now… Racku, let’s play…

Chugging, chugging, the little train goes… Racku, do you see it?
If I skip lightly over the initial topic, skip the part where the A.I.’s individual input and output ports need to be introduced, and skip the introductory persuasion toward an interest in learning… Let’s just move on and work with a Racek who already has the basic teaching scheme behind him; I’ll explain why later (probably, because that’s what I think now)…

Racku, record a new optical input TT_01, fisheye, full-control profiling…
In other words: it’s yours, Racku, do whatever you want with it, including servo control. Artificial intelligence doesn’t need a converter that transforms the image for the human eye, and we won’t make Racek’s job any easier.
Racku, record a new optical input TT_02, camera, cooperation profiling. Camera output to the control panel.
A little polemic… humans need a “normal view”: during teaching we point out where the artificial intelligence should look at a given moment, and conversely there are moments when we want the artificial intelligence to tell us where to look. A conflict arises when it shows us an image that isn’t visible through the regular optical input but was obtained through the fisheye lens. There is also the question of how to display such an image: as seen through the optics, or adjusted for humans and their limitations, marking only the limited field of view of ordinary optics.
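To make the two inputs concrete, here is a minimal sketch of how such a registration might look. The `OpticalInput` structure, the `register_input` call, and the profile names are my own assumptions for illustration, not Racek’s real interface; only the channel names TT_01 and TT_02 come from the text.

```python
from dataclasses import dataclass

@dataclass
class OpticalInput:
    name: str        # channel name, e.g. "TT_01"
    lens: str        # "fisheye" (raw, unconverted) or "camera" (human-viewable)
    profile: str     # hypothetical profiling mode
    to_panel: bool   # mirror the feed to the human control panel?

# Registry of sensory channels available to the learning system.
REGISTRY: dict[str, OpticalInput] = {}

def register_input(inp: OpticalInput) -> None:
    """Expose a new sensory channel under its name."""
    REGISTRY[inp.name] = inp

# TT_01: raw fisheye, fully Racek's to control; no human-facing output.
register_input(OpticalInput("TT_01", "fisheye", "full_control", to_panel=False))
# TT_02: ordinary camera, cooperation mode, mirrored to the control panel.
register_input(OpticalInput("TT_02", "camera", "cooperation", to_panel=True))
```

The split mirrors the polemic above: one channel the human never needs to read, one shared with the operator.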

Chugging, chugging, the little train goes… Racku, do you feel it?
The basis for simulating the learning model with the train is an oval, which, unlike a circle, also has straight tracks, with several sensors in each track section triggered by passing locomotives. In addition to the two optical systems, Racek thus receives a sensory impression of where the train is; ideally we would also plan an acoustic sensor, needed for simulating derailments and for parameterizing the accompanying sounds. Among other things, this will form the basis of a future echolocation system.
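The oval with its track sensors can be sketched as a ring of segments, each carrying one sensor that fires as the locomotive passes. The segment count and the event model below are assumptions for illustration.

```python
# Assumed layout: the oval as a ring of track segments, each with one
# position sensor triggered by the passing locomotive.
SEGMENTS = 8  # e.g. 2 straight runs and 2 curved ends, split into pieces

def next_position(current: int, direction: int = +1) -> int:
    """Index of the next sensor the train will trigger (ring topology)."""
    return (current + direction) % SEGMENTS

# A passing train produces a stream of sensor events; one full lap
# triggers every sensor exactly once and returns to the start.
events = []
pos = 0
for _ in range(SEGMENTS):
    pos = next_position(pos)
    events.append(pos)
```

Reversing `direction` models the train running the oval the other way, which matters once control in both directions becomes a skill.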

Chugging, chugging, the little train goes… Racku, show it!
A simplified description of the system gives us a state where Racek sees what we need (not that it understands it), sees something we don’t understand in its raw state (the fisheye feed), and perceives the train’s position through the sensors. Besides voice control, Racek also has a sensor for echolocation; a small microphone is enough, and it hears the train’s position relative to the microphone and, once the sounds are parameterized, the differences between them.
Another element is the locomotive control system; we need something sophisticated here, so that Racek receives data about how the user regulates the train and, once the control is handed over, can perform that regulation itself.
I would like to remind you that putting all of this into a machine learning simulation this way, without organization, would take many human years; help is needed.

The goal is to obtain a data corpus that understands the train as an object, understands the train’s position, and sees it; specifically, it shows the train on the control panel by operating camera TT_02. Sound parameterization is for now secondary, with few stimuli, and processing of camera TT_01 won’t be that quick either: Racek is still learning to control it, based on its experience with the controls and the image from TT_02.

Simulation in learning also includes monitoring the operation of the layout, where the train slows down in curves and accelerates on straight sections. Let’s not derail the train just yet. After a successful simulation of the first layer of the control corpus, we will add a few switches, where each pair reacts to the approaching train and aligns itself with its direction… but apparently I need to simplify even more, so once again, another pass…
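The slow-in-curves, fast-on-straights behaviour can be sketched as a trivial speed policy over the layout’s segment types. The oval below and the concrete speed values are invented for illustration.

```python
# Assumed oval: two straight runs joined by curved ends.
OVAL = ["straight", "curve", "curve", "straight", "curve", "curve"]

def target_speed(segment: str, cruise: float = 1.0,
                 curve_factor: float = 0.6) -> float:
    """Reduce throttle in curves, restore it on straight sections."""
    return cruise * curve_factor if segment == "curve" else cruise

# Speed profile for one lap of the oval.
speeds = [target_speed(s) for s in OVAL]
```

A learned controller would arrive at something like this profile from experience rather than a rule, but the target behaviour is the same: don’t derail the train in the curves.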

Chugging, chugging, the train goes…
For a complete start to the play, we need an oval layout, sensors monitoring the train’s position on it, and, as a skill from the very beginning, the ability to control the train’s direction and speed. It is possible, even likely, that the train will sometimes start without our intervention, but we won’t leave our future colleague to figure it out alone; we’ll show it how to work with the train.

For the next step we add paired switches, and the layout gets a “shortcut.” One switch is thrown; the other reacts to the approaching train. This is common in model layouts: a contact in the track is bridged by the locomotive’s metal wheels, and the switch is thrown electrically. Our data corpus will learn to throw it, and it will be interesting to see when it first switches one under a passing train.
At this point, expect results from a neural network controlling the train in both directions at different speeds, with the possibility of choosing the route by throwing the switch.
If the switch is thrown under a passing train, that’s a bonus, because it indicates “interest” in further states, even where teaching and assistance pointed elsewhere.
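The paired-switch mechanism described above, one switch thrown by the user (or the corpus) and its partner thrown electrically by the track contact, might be modelled like this. The class, the route names, and the method names are my assumptions.

```python
class PairedSwitch:
    """A pair of switches guarding the shortcut across the oval."""

    def __init__(self) -> None:
        self.entry = "main"  # entry switch position: "main" or "shortcut"
        self.exit = "main"   # exit switch position

    def throw_entry(self, route: str) -> None:
        """The user or the corpus chooses the route at the entry switch."""
        self.entry = route

    def train_contact(self) -> None:
        """The locomotive's metal wheels bridge the track contact; the
        exit switch electrically aligns itself with the chosen route."""
        self.exit = self.entry

pair = PairedSwitch()
pair.throw_entry("shortcut")
pair.train_contact()
# Both switches now agree, and the train takes the shortcut safely.
```

Throwing the entry switch *after* the contact has fired is exactly the “switch thrown under a passing train” case the text flags as an interesting state.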

Another element of the layout is a dead-end track simulating a depot or transshipment point; then a parallel path along the oval with crossings over the main oval; then parallel tracks as platforms and a set of dead-end tracks as a sorting yard, and so on and so forth… let’s play.

This is how we, quite easily and relatively cheaply, gain skills for a basic artificial intelligence corpus that plays with a certain deliberation, its own deliberation. If we start working on it roughly like this, we can enjoy the train’s movement and monitor its routes.
However, once the train is rushing along nicely and is being controlled in various ways over the available routes, we can consider another layer and place an optical sensor on the front of the locomotive. I don’t mean a camera capturing a wide, distant area, but only a small lens capturing the track (spatial addressing of sensors) and the immediate surroundings, with marks and instructions for regulating the train’s movement.
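The front-mounted lens reading regulation marks could be sketched as a small mark-to-action table: each mark the lens recognizes maps to an adjustment of the current speed. The mark vocabulary below is invented, not taken from any real layout or standard.

```python
# Hypothetical track-mark vocabulary and the regulation each implies.
MARK_ACTIONS = {
    "SLOW": lambda v: min(v, 0.4),  # speed-zone mark: cap the throttle
    "STOP": lambda v: 0.0,          # platform boundary: halt the train
    "FREE": lambda v: v,            # end of restriction: keep speed
}

def regulate(speed: float, mark: str) -> float:
    """Apply the regulation a recognized mark demands; unknown marks
    are treated as no restriction."""
    return MARK_ACTIONS.get(mark, MARK_ACTIONS["FREE"])(speed)
```

The interesting learning problem in the text is precisely that this table is *not* given: the system must discover, from passes or from simulation, that the marks mean anything at all.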

It’s not stopping, not stopping… It doesn’t stop anywhere…

At this moment, layouts marked with speed zones, stopping instructions (platform boundaries), and the like present two paths forward, because we have no trained initial data: we can either observe thousands of passes with total disregard of the marks, or embark on simulations in the style of Mother and Daughters.

More interesting, however, would be a thousand modelers collaborating on one deep learning corpus, in cooperation with interested school departments that would take over the Mother’s coordination role. The results could also be obtained faster and more interestingly.
From the Mother’s results, a basic module for sharing skills can be created.

Goal one is the simulation of a presentation layout.
But in the future play of my friends, I mean the part of the game where the layout loses its basic shape and the modeler’s fantasy enters, making it hard to predict the real, final shape of the layout.

Let’s leave the basic control the same for now, allow ourselves a small detour in our thinking before fully releasing the module version, and figure out where to get data for the simulation with marks, rather than waiting for the artificial intelligence to notice them on its own.
Real gaze recordings from train drivers’ cabs can help us obtain this data for interaction with the environment. Thousands of hours of varied recordings sharing the same movement and speed scheme could support the expected refinement of teaching and make it possible to deploy a camera on the train model instead of a mere optical sensor.
Only then does it start to make sense to distribute the data corpus among modelers and exploit the precision of presentation layout constructions.
Connecting a camera to the front of the locomotive further expands the teaching possibilities and leads us to the next goal…

Playing from God’s perspective
In games, this means a view from above, a bird’s-eye view, often with an adjusted horizon; here, though, I mean using spatial sensing to gain an overview of events and to connect the context of correlations between the trained skills, the sensors, and an overview camera.
Now comes the time for the train derailment and for the consequences of the modeler ignoring ordinary rules: angles that are too sharp, ascents and descents, sharp track crossings, or even incorrect track markings, for example assembled in the wrong order… and it is necessary for our future artificial intelligence to see this collision data and record it sensorily.
Imagine a simulated punishment of returning to the depot and “doing” nothing before being allowed to set off again. A small child would go crazy with restlessness, an older one would refuse the rules, and an adult would simply leave because they don’t have time… but our A.I. is just fine with this… (unless its basis is the formula I WANT to play! Just like I want to protect plants, I want to find stimuli… but that rather implies… I want a “certain” Satisfaction). Our kind of punishment can be a repeated playback of the recording until the cause of the collision is marked and the reason presented to the user.

For now, let’s treat returning the train to the depot and the possibility of a fresh start as non-conflicting. If there is an error in the traffic route, the basic skill is to exit the corpus with a warning about an inconsistent signaling system.

And how to continue playing? Try searching for model railway topics on Google; in many ways the hobby closely mirrors the real world, and even very abstract combinations of stimuli for deep learning can be proposed.
The next step is the decentralization of computation, sharing human ingenuity across disciplines.

Original text: Racku, pojďme si hrát s vláčky – Mareyi CZ
