Archives

Neural networks in biotope management as a means of supporting healthy plant growth

Today's content is an older post from 2019, but it is still valid when looking at the debate around decision-making and the goals set for it. The text is meant to show that the task of AI is not to water the flowers, but to support their healthy growth across the full spectrum of the habitat's needs.


Practically every week, a new experiment with deep learning and data corpus acquisition appears, one that analyzes, predicts, translates, and suggests… Coverage is becoming comprehensive. Thanks to falling technology prices and the open sharing of deep learning fundamentals, more scientific groups are engaging in experiments, and the commercial sphere is not lagging behind. Even a small company can set up a server that meets the requirements of machine learning and cultivate its own data corpus on available samples. What is lacking today are machine learning experts and a selection of ready-made solutions: modules that we could incorporate immediately and thereby save time for other lines of research. Are we really at the very beginning?

To avoid reaching too far for an example, one research cell involves deploying shared deep learning with the aim of a collaborating A.I. taking part in the symptomatic management of a botanical greenhouse, with subsequent extension to xBiotopes, where the size of the biotope no longer matters.
In the example, aphids stand at the beginning, and at the end there is a data corpus cooperating with processing and information-retrieval modules, enabling further layers of learning and engaging users in its education.
I have been playing with this idea for a long time, and money for a server and people is still missing 🙂 Little things in the researcher’s world 🙂
That does not prevent me from describing the principle, which can be derived from what has already been achieved in AI today; it is not even science fiction, and the only thing missing in today's reality is available computing power…

Aphids are an excellent example of a natural phenomenon with a multispectral impact on learning, and the acquired data corpus lends itself to further development toward other pests, to cataloging symptoms on various plant species, and to linking the acquired information with the data world of encyclopedias.

At the beginning, there was a photo… and it was an Aphid
This part of the blog should be about training, but enough has been written about deep learning to grasp the basic principle, so I allow myself to keep a layman's view.
Teaching a data corpus through machine learning to see and recognize aphids is easy and imaginable. All it takes is thousands of pre-prepared photos, several video sequences, and… I’m really simplifying it, but still, teaching a computer to see aphids today is an easy matter.
If you doubt it, just look here, for example: https://developers.google.com/machine-learning/crash-course/ or at older texts here: https://medium.com/@ageitgey/machine-learning-is-fun-80ea3ec3c471.
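
To make the idea more concrete, here is a minimal sketch of what such training could look like with transfer learning in TensorFlow/Keras. The folder layout, class names, and epoch count are hypothetical placeholders, not a description of any existing setup.

```python
# A minimal transfer-learning sketch (hypothetical paths/classes): teach a model
# to tell "leaf" crops from "aphid" crops, roughly as described above.
import tensorflow as tf

IMG_SIZE = (224, 224)

# Expects folders like data/train/leaf/*.jpg and data/train/aphid/*.jpg (hypothetical layout).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=32)

# Reuse a network pre-trained on ImageNet and train only a small head on top.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # P(aphid)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```

With a few thousand labeled crops, a sketch like this is usually enough to demonstrate the "seeing aphids" step; everything interesting in the text starts after that.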

Google's Machine Learning pages show how to think about preparing resources, how to avoid problems right at the beginning, and why fewer goals are better than more.
There is no need to doubt for a moment that in training I would prefer to involve the whole garden, greenhouse, and aquarium, to prepare any biotope for acceleration… and to involve a camera system, home control, photovoltaics… But even if I set aside the hardware background needed to shorten processing time, there are so many disparate inputs and outputs that the result cannot even be estimated.
It is necessary to start from scratch and design Learning Direction Goals so that the boundaries of learning are only ever expanded. The greater the connection with already acquired data, the faster the development and the grasp of context.

Another way is the layering of skills: for example, a deep learning system trained on collecting image data in a botanical garden can easily be extended with sensors at the input, which give the system additional contextual data in a deeper layer of context. The artificial intelligence can thus watch the work of automation, semi-automation, and human intervention, and learn.
Automation will soon be displaced, because control of irrigation timers, lighting, air conditioning, and heating will be learned from the available sensors; then the roles reverse, and the human supervises the work of the neural network through the same observation sensors.
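
As a rough illustration of this layering, the image model's output for a zone could simply become one feature alongside the sensor readings, so that a later layer can learn the context around it. The class fields, units, and names below are a hypothetical sketch, not a description of an existing system.

```python
# A hedged sketch of "layering": the image model's aphid score becomes one feature
# among sensor readings, so a deeper layer can learn context. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class ZoneObservation:
    aphid_score: float       # output of the image model for this zone, 0..1
    soil_moisture: float     # volumetric %, from a soil probe
    air_temp_c: float
    humidity_pct: float
    hours_since_watering: float

def to_feature_vector(obs: ZoneObservation) -> list[float]:
    """Flatten one zone's context into a vector a downstream learner can consume."""
    return [obs.aphid_score, obs.soil_moisture, obs.air_temp_c,
            obs.humidity_pct, obs.hours_since_watering]

# Example: a single observation ready to be appended to a growing training corpus.
sample = ZoneObservation(0.83, 21.5, 24.0, 55.0, 30.0)
print(to_feature_vector(sample))
```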

Aphids as the basis for the neural system of symptomatic technology
I like aphids for tests about as much as geneticists like fruit flies. They are easy to keep, breed, and transfer to new biotopes. Aphids are slow, so it is easy to take close-up photos of every stage of their life: development from an individual to a colony, the subsequent emergence of the winged form, which flies a little further, sheds its wings, and everything continues in a new place. It is easy to observe the colonization of plants and the gradual occupation of rosettes and branches, all the way to carpets of aphids on leaves or pear aphids on flowers and young fruit.
That means easy preparation of comprehensive samples: from the initial differentiation of "leaf" and "aphid", through "plant" and "aphid", where we already work with parts of plants and various details of the whole plant, up to targeted recognition of aphids even on the shaded underside of leaves, because the leaf curls in the manner the system has learned to associate with aphid infestation while the camera system observed an unaffected biotope and its targeted inoculation with aphids.
Treatment is also easy, with many available means, including a wide range of organic solutions. The goal of the neural network is to protect plants, not just to recognize aphids, so manifestations leading to plant destruction by pests must be accompanied, to a large extent, by the "application manifestation" of the protective agent and then by the image of the healed plant as the goal.

Aphids are always accompanied by two other life forms that are so different that they cannot be mistaken for aphids, and both are covered by the existing curriculum. As a bonus, one life form collaborates with the plant, and the other with the aphid… Do you already know which ones?

Ladybug and Ant
When artificial intelligence at today's level of deep learning shows us the first manifestation of a pest, followed by unwanted colonization aimed at destroying the plant, and with the established goal of "plant protection," let us assume that such a system will alert us as soon as the optical systems detect aphids, even the very first one, whether it arrives flying or carried by an ant.
Ants and aphids really cannot be mistaken for one another, but at the beginning of training their image regions would partly overlap and interfere with each other. What matters, however, is observing ants that tend aphids. A subsequent bonus is having several species of ants within reach of the optical systems monitoring the plant environment.
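
A hedged sketch of that alerting step might look like this: the trained classifier runs on incoming camera frames and raises a notification the first time the score crosses a threshold. The model file, the frame source, and the notification hook are all hypothetical placeholders.

```python
# A minimal sketch of the alerting idea: run the trained model on camera frames and
# notify the user the first time the aphid score crosses a threshold.
# `capture_frame`, `notify`, and the model path are hypothetical placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("aphid_classifier.keras")  # hypothetical file
THRESHOLD = 0.8

def check_frame(frame: np.ndarray) -> float:
    """Return P(aphid) for one resized RGB frame of shape (224, 224, 3).
    Preprocessing is assumed to be baked into the saved model, as in the sketch above."""
    return float(model.predict(frame[np.newaxis].astype("float32"), verbose=0)[0, 0])

def monitor(capture_frame, notify):
    """Simple endless watch loop over whatever camera source is plugged in."""
    while True:
        score = check_frame(capture_frame())
        if score >= THRESHOLD:
            notify(f"Possible aphid detected (score {score:.2f})")
```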

Another direct companion is the seven-spot ladybird, whose ranks have grown and which is reportedly being gradually displaced by the more aggressive Asian form.
A.I. thus has several varieties available and can "keep track of which ladybirds are present," which is a small bonus compared to observing the elimination of aphids by ladybirds. This moment of capture interests me greatly, because the neural network must evaluate the ladybird as a positive means of protective treatment. Moreover, it must protect both plants and ladybirds and ultimately warn against applying a protective agent when a biological solution is a reasonable possibility. The ladybird, with its shape and wing covers, also feeds into other teaching topics and data development.

Another creature collaborating in open nature is the rove beetle, which with its shape and behaviour again expands the distinguishing layer for insects. The spectrum of information can be supplemented, for example, by replacing the flying form of aphids with the dark-winged fungus gnat, which also significantly damages plants and whose initial, larval stage lives in the substrate.

At this stage, we could get a neural network capable of locating basic insect manifestations on plants and studying them with the aim of distinguishing insect species according to behavior manifestations on different plants. Another related branch, which is currently weaker, is teaching the corpus in plant knowledge.
The initial dataset contains only basic parameters of plants and their appearance. It is likely that the neural system will discover the difference between the leaves of a beefsteak tomato and the robust leaves of a hairy pepper. It will also discover relationships between a plant's structure and the pattern of infestation, and so divide plants into those with priority for monitoring and those merely kept under control.
The neural system is thus expected to learn on its own that if a strawberry or tomato plant is within reach, there will be no aphids on the cacti, and that input data from succulent biotopes will point to different pests than aphids. Other groups of pests will emerge that the system should by then be able to localize.

So far, however, there has been no connection of symbols; the artificial intelligence has learned to recognize insects and plants, with mold manifestations as a bonus, but that is all. It has found, recognized, and identified "objects," mapped the context, and, in line with its goal, alerted us to harmful objects. It also monitors other manifestations and keeps learning. Connecting objects with concepts and with information about them for teaching is another goal, so let's move on to the next step…

Environmental control sensors
As I mentioned earlier, the goal is symptomatic biotope management, which includes not only optical monitoring but also environmental system control involving work with sensors and control elements.
If you wonder why I did not start with this simpler process, for now accept the idea that it really is easy to teach a neural system to control lighting, irrigation, temperature… but at the beginning it is just a matter of working with switches; there is no thought in it. If the neural network only monitors the control of the environment, it will only learn to control the environment.

Control of the environment is thus a layer only in symptomatic plant health exploration. Thanks to sensors, the neural network monitors timers and machines, thus taking over the basic teaching scheme according to the given pattern and adding additional context.
The manifestation of drought is also a symptom of plant damage unrelated to pests or fungus, so the system will easily learn the "dry condition" as a consequence of not watering and, conversely, fungus and wilting as consequences of overwatering. On top of the sensors and the controls, it thus adds context "for the development" of plants with regard to symptomatic manifestations verified by the sensor system. This is important because extensive botanical exhibitions require zonal care rather than the blanket application of support used in small biotopes, just as in a small greenhouse there is a difference between drip and periodic irrigation.
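
One way to picture this sensor-and-control layer is a plain log that records, zone by zone, what the sensors saw and what the valves and timers did, so that "dry" can later be associated with "not watered". The sensor and actuator interfaces below are hypothetical; only the logging idea is shown.

```python
# A hedged sketch of the sensor/control layer: log what the timers and valves do next to
# what the sensors see, so a network can later learn "dry" as a consequence of "not watered".
# The read_* / valve_is_open callables are hypothetical hardware hooks.
import csv
import time
from datetime import datetime, timezone

def log_zone_state(path, zone_id, soil_moisture, air_temp_c, valve_open):
    """Append one timestamped row per zone; the corpus grows as plain CSV."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(), zone_id,
            soil_moisture, air_temp_c, int(valve_open)])

def run(read_moisture, read_temperature, valve_is_open, path="zone_log.csv"):
    """Example loop: sample one hypothetical zone every 10 minutes."""
    while True:
        log_zone_state(path, "greenhouse_zone_3",
                       read_moisture(), read_temperature(), valve_is_open())
        time.sleep(600)
```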

The goal of this part is for the artificial intelligence to learn to work with zonal environmental influence and, according to symptomatic manifestations, eventually be able to add irrigation exactly where it is needed in specific targeted areas, while also operating the other means of controlling the environment of enclosed, or rather bounded, biotopes. Just as with a greenhouse, or a botanical garden across its whole range of exposure, such a facility could manage public greenery and public lighting, collect information on local consumption, and plan switching regimes within the use of renewable energy sources. And this is not a distant dream; it only requires deploying a learning and testing environment outside the screen for real demonstrations.

Original text: https://mareyi.cz/neuralni-site-pri-rizeni-biotopu-a-prostredek-podpory-zdraveho-rustu-rostlin/

Rack, let’s play with trains

This reflection stems from the question of what my friends would use a deep learning system for in their leisure time, and what would interest them in raising a colleague in the form of artificial intelligence. Because the basis of human creativity is play, let's play… and trains are up next…

A model railway enthusiast probably envisions their intended layout in a faithful representation simulating real traffic, with as many self-service elements and details as possible, such as lighting, signals, switches, and gadgets. Also, perhaps the possibility of combining the appearance of mainline and suburban trains… Check out Google Images, some layouts are absolutely epochal, and it’s okay to pay admission to see them. Many layouts, however, are in households and not everyone sees them. We’ll transition to such a playing system step by step… and now… Racku, let’s play…

Chugging, chugging, the little train goes… Racku, do you see it?
If I lightly skip the initial topic, skip the part where it is necessary to introduce to the A.I. the individual ports serving as inputs and outputs, and skip the introductory persuasion to take an interest in learning… let's just move on and use Racek, who already has the basic teaching scheme behind him; I'll explain why later (probably, because that is what I think now)…

Racku, record a new optical input TT_01, fisheye, full control profiling…
In other words, it’s Your Racku and do whatever you want with her, including servo control. Artificial intelligence doesn’t need a converter to transform the image for the human eye, and we won’t make Rack’s job any easier.
Racku, record a new optical input TT_02, camera, Cooperation profiling. Camera output to the control panel.
A small point of debate… humans need a "normal view" and need to be shown where the artificial intelligence should look at a given moment during teaching; conversely, we encounter situations where we want the artificial intelligence to tell us where to look. Here a conflict must arise when the artificial intelligence shows us an image that is not visible through a regular optical input but is obtained through a fisheye lens. There is also the question of how it will be displayed: as seen through the optics, or adjusted with regard to humans and their limitations, symbolizing only the limited view of ordinary optics.

Chugging, chugging, the little train goes… Racku, do you feel it?
The basis of the learning simulation with the train is an oval, which unlike a circle also has straight sections, and there are several sensors in each track section that are triggered by passing locomotives. In addition to the two optical systems, Racek thus also receives a sensory impression of where the train is, and it would be ideal to plan an acoustic sensor as well, necessary for simulating derailments and for parameterizing the accompanying sounds. Among other things, this will be the basis for a future echolocation system.
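
To illustrate what those track sensors could give Racek, here is a small sketch under assumed layout data: each trigger pins the train to a known point on the oval, and two triggers in a row yield a speed estimate. The sensor positions and oval length are invented for the example.

```python
# A small sketch (hypothetical layout data) of what the track sensors provide:
# each trigger fixes the train's position, and two triggers in a row give its speed.
import time

# Positions of track sensors along the oval, in centimetres from an arbitrary origin.
SENSOR_POS_CM = {"S1": 0.0, "S2": 60.0, "S3": 120.0, "S4": 180.0}
OVAL_LENGTH_CM = 240.0

class TrainState:
    def __init__(self):
        self.last_sensor = None
        self.last_time = None
        self.speed_cm_s = 0.0

    def on_trigger(self, sensor_id: str):
        """Called whenever a passing locomotive closes a track sensor."""
        now = time.monotonic()
        if self.last_sensor is not None:
            travelled = (SENSOR_POS_CM[sensor_id]
                         - SENSOR_POS_CM[self.last_sensor]) % OVAL_LENGTH_CM
            self.speed_cm_s = travelled / (now - self.last_time)
        self.last_sensor, self.last_time = sensor_id, now
```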

Chugging, chugging, the little train goes… Racku, show it!
A simplified description of the system gives us a state where Racek sees what we need (not that it understands it), sees something we do not understand in its raw state, and perceives the position of the train through sensors. Besides Racek's voice control, it also has a sensor for echolocation; a small microphone is enough, and it hears the position of the train relative to the microphone as well as differences in sound (once the sounds are parameterized).
Another element is the locomotive control system; we need something sophisticated for Racek, so that Racek receives data about the user's regulation and, once allocated, can take over that regulation itself.
I would like to remind you that putting all this into a machine learning simulation in this way, without organization, will take many person-years, and help is needed.

The goal is to obtain a data corpus that understands the train object, understands the position of the train, and sees it; specifically, it shows it on the control panel using the provided operation of camera TT_02. Sound parameterization is for now secondary, with few stimuli. Nor will there yet be fast processing of camera TT_01; Racek is still learning to operate it based on experience with the controls and the image from TT_02.

Simulation in learning also includes monitoring the operation of the layout, where the train slows down in curves and accelerates on straight sections. Let's not derail the train just yet. After a successful simulation of the first layer of the corpus in control, we will add a few switches, where each pair reacts to the approaching train and aligns itself to its direction… but apparently I need to simplify it even more, so once again, another pass…

Chugging, chugging, the train goes…
For a complete start to the play, we need an oval-shaped layout, sensors for monitoring the position of the train on the layout, and the possibility of controlling the direction and speed of the train from the beginning as a skill. It is possible and likely that the train will sometimes start without our intervention, but we won’t leave it to our future colleagues, and we’ll show them how to work with the train.

For the next step, we'll add paired switches, and the layout will get a "shortcut." One switch is thrown, the other reacts to the approaching train, which is common in model layouts: a contact in the track is bridged by the locomotive's metal wheels, and the switch is thrown electrically. Our data corpus will learn to throw it, and it will be interesting to see when it first switches under a passing train.
At this point, expect results from a neural network controlling the train in both directions at different speeds and the possibility of choosing the route by switching the switch.
If the switch is thrown under a passing train, it's a bonus, because it indicates "interest" in further states; teaching and assistance were directed elsewhere.
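
One possible shape for the control interface the learning system would act on is a small set of discrete actions over throttle, direction, and the switch pair. The hardware-facing functions below (`set_throttle`, `set_direction`, `set_turnout`) are hypothetical placeholders, not a real driver API.

```python
# A hedged sketch of the action interface: discrete actions over throttle, direction,
# and the one switch pair. The hardware hooks passed in are hypothetical.
from enum import Enum

class Action(Enum):
    FASTER = 0
    SLOWER = 1
    REVERSE = 2
    TOGGLE_SWITCH = 3

def apply_action(action, state, set_throttle, set_direction, set_turnout):
    """Mutate a small state dict and forward the change to the (hypothetical) hardware layer."""
    if action is Action.FASTER:
        state["speed"] = min(state["speed"] + 1, 10)
        set_throttle(state["speed"])
    elif action is Action.SLOWER:
        state["speed"] = max(state["speed"] - 1, 0)
        set_throttle(state["speed"])
    elif action is Action.REVERSE:
        state["forward"] = not state["forward"]
        set_direction(state["forward"])
    elif action is Action.TOGGLE_SWITCH:
        state["shortcut"] = not state["shortcut"]
        set_turnout("pair_1", state["shortcut"])
```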

Another element in the layout is a dead-end siding simulating a depot or a transshipment point, then a parallel route along the oval with crossings of the main oval, then additional parallel tracks as platforms and a set of dead-end sidings as a sorting yard, and so on and so forth… let's play.

This is how we quite easily and relatively cheaply gain skills for the basic corpus of an artificial intelligence that plays with a certain deliberation, its own deliberation. If we start working on it roughly like this, we can enjoy the train's movement and monitor its routes.
However, once the train is already running briskly and is being controlled in various ways over the possible routes, we can consider another layer and place an optical sensor on the front of the locomotive. Here I do not mean a camera capturing a wide distant area, but only a small lens capturing the track (with spatial addressing of sensors) and the immediate surroundings with marks and instructions for regulating the train's movement.
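
As a sketch of how such marks could regulate movement, a recognized mark might simply map to a speed rule. The mark names and the mapping below are illustrative assumptions only, standing in for whatever the recognizer would actually learn.

```python
# A sketch of the locomotive-front layer: recognized track marks map to speed regulation.
# Mark names and rules are hypothetical; only the mapping logic is shown.
MARK_RULES = {
    "speed_zone_slow": {"max_speed": 3},
    "speed_zone_fast": {"max_speed": 8},
    "platform_start": {"action": "begin_braking"},
    "platform_end": {"action": "stop"},
}

def regulate(recognized_mark: str, current_speed: int) -> dict:
    """Turn one recognized mark into a regulation decision for the train."""
    rule = MARK_RULES.get(recognized_mark, {})
    decision = {"speed": current_speed}
    if "max_speed" in rule:
        decision["speed"] = min(current_speed, rule["max_speed"])
    if rule.get("action") == "stop":
        decision["speed"] = 0
    return decision

print(regulate("speed_zone_slow", current_speed=7))  # -> {'speed': 3}
```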

It doesn't stop, it doesn't stop… It doesn't stop anywhere…

At this moment, layouts marked with speed zones, stopping instructions (platform boundaries), and the like have two paths forward, because we have no trained initial data: we can either watch thousands of passes with the marks completely ignored, or embark on simulations of the Mother and Daughters type.

More interesting, however, would be a thousand modelers collaborating on one deep learning corpus, in cooperation with interested school departments that would take over the Mother's level of coordination. The results could also be obtained faster and in more interesting ways.
From Mother’s results, a basic module for sharing skills can be created.

Goal one is the simulation of a presentation layout.
But in my friends' future play, this is the part of the game that is meant: the layout loses its basic shape, the modeler's imagination enters, and it becomes hard to predict the real, final shape of the layout.

Let’s leave the basic control the same for now and try to deviate a bit in consideration before completely releasing the module version and find out where to get data for simulation with marks before waiting for artificial intelligence to notice them.
This data for interaction with the environment can help us obtain real gaze recordings from train driver cabins. Thousands of hours of various recordings with the same movement and speed scheme could support this expectation of teaching refinement and enable the train model to be deployed instead of an optical sensor, a camera.
Then it starts to make sense to distribute the data corpus among modelers and use the precision of presentation layout constructions.
Connecting a camera to the locomotive front further expands teaching possibilities and leads us to the next goal…

Playing from God’s perspective
In games, this means a view from above, a bird's-eye view, often with an adjusted view of the horizon; but here I mean using spatial sensing to gain an overview of events and to connect the context of correlations between the trained skills, the sensors, and an overview camera.
Now comes the time for the train derailment and for the consequences of the modeler not adhering to ordinary rules, such as angles that are too sharp, ascents and descents, sharp track crossings, or even incorrect track markings, for example assembled in the wrong order… and this collision data needs to be seen by our future artificial intelligence and recorded by the sensors.
Imagine simulating the punishment of returning to the depot and "doing" nothing before being allowed to set off again: a small child would go crazy with restlessness, an older one would refuse the rules, and an adult would simply leave because they don't have time… but our A.I. is perfectly fine with this (unless its basis is the formula I WANT to play! Just as: I want to protect plants, I want to find stimuli… though it rather implies: I want "a certain" satisfaction). Our kind of punishment can be repeated playback of the recording until the cause of the collision is marked and the reason is presented to the user.

For now, let’s consider returning the train to the depot and the possibility of a fresh start as non-conflicting. If there is an error in the traffic route, the basic skill is to exit the corpus with a warning about the inconsistent signaling system.

And how to continue playing? Try searching for the topic of railway modeling on Google; in many ways it closely mirrors the real world, and it is possible to propose even very abstract combinations of stimuli for deep learning.
The next step is decentralization of computation when sharing human ingenuity across disciplines.

Original text: Racku, pojďme si hrát s vláčky – Mareyi CZ

As the AI Assistant is called, so the AI Assistant responds.

With extended response capabilities, the AI Assistant has been empowered to unleash relieving bursts in a rather academic format in response to unwanted or undesirable user stimuli. This has also prompted user-driven testing of new reactions.

The task of Robopsychology has been to symmetrically harmonize this escalating element, potentially concealing aggression. Primarily, in training scenarios for the development of personality traits for AI Assistants, there are further proposals to increase the scope or expertise of the AI Assistant for better targeting of its user and achieving better predictions of user needs. Emphasis has also been placed on user development in interactions with AI Assistants for better fulfillment of expectations in the AI Assistant's response.

Read more… →

When the Home AI Assistant discovers we have more time than we pretend, don’t immediately call the Robopsychologist

When your home AI Assistant discovers that we have more time than we pretend, don’t immediately call the Robopsychologist.

In our daily race against time, as we seemingly try to tackle everything life throws at us, we can sometimes feign a greater workload than we actually have. The domestic AI Assistant, with its meticulously detailed insight into our life's routine, has read our calendar and to-do list so thoroughly that it knows more than we do ourselves. Moreover, through its camera observations, it can estimate how far along our routine we are at any given moment.

The illusion of work engagement is one of the common scenarios for which humanity doesn’t need training. Through the eyes of cameras, our AI monitors every move, attempting to uncover the mysteries of our hectic day for better predictions of tomorrow. But what if we take this pretense too far?

Read more… →

RACEK – 04 – RACEK wakes up

Racek now mirrored a meditative calmness in his face, and a hint of relaxation supported a gentle smile. His wandering eyes beneath his eyelids stopped, he straightened his entire spine and relaxed his intricate chest. It was evident at first glance how behind each segment hid sensory technology allowing Racek to perceive nuances in the environment previously denied to such complex robots.

There was no need to expand each robot with every skill when another AI assistant already performed sufficient sensor management. In robotics, we had become stuck in functionality.

Read more… →

RACEK – 03 – RACEK is here

In the garden, I mark out with a hoe the space for a new bed. This year, we're once again approaching five hundred positions in the planters, and the self-irrigating Nil bed, formed as a walk-through rockery made of sandstone blocks, will easily fill up. In addition to the two existing isolated production spaces, we expand every year the opportunities both for education and for replicating the best solutions from our history. While working in the garden I enjoy recalling the Growing Beyond Earth project, for which we provided data for machine learning within the winter garden biotope with a minimum winter temperature of around twelve degrees, successfully overwintering many peppers and chilies. It's amusingly ironic how peppers bear fruit for many years, even throughout the winter under an extended photoperiod.

Read more… →

Artificial intelligence and the defensive perimeter


Intentionally borrowing from military vocabulary, the combination of the words “defense” and “perimeter” gives rise to the concept of a boundary, where perspectives differ between inside and outside. Here, “defense” describes activities aimed at protection and security… The defensive perimeter, managed by artificial intelligence, is a predefined area of interest for the user and the perspective of artificial intelligence on its subjects within this field.

Expanding upon the introduction with more military analogies, we can imagine the core of the protected camp, symbolizing the center of the perimeter, which, in fact, is the House, the property, and the inhabitants within. Apart from physical users, it includes their set of virtual interests, such as email, data mailbox, web environment, remote access (e.g., banking, but also education), and the list could go on, enumerating stacks of keywords. Last but not least, flora and fauna are also considered. Thus, once again, at the very center are the users themselves and the adaptation of their environment to assist by artificial intelligence.

Read more… →

RACEK – 02 – The endless wait

This morning was the calmest I can remember. Everyone was quiet and composed. They all avoided me unanimously, leaving me to my thoughts and preparations for today’s event, as significant as when Nathan first arrived. Well, he arrived; we brought him. However, today a delivery with the robotic avatar Racek is expected, which is supposed to be an extension of our home, our artificial intelligence named Nathan, and according to the expert plans of the supplying company, it should achieve full autonomy.

Moreover, it was a robotic artificial intelligence capable of empathetic understanding, capable of grasping emotions and also capable of expressing them. His name came from wordplay at the time the rack server was delivered, when the AI assistant introduced himself as Rack and asked what his name would be in our household. Because of his constant interactions, I found it funny to call him Chatty Rack, but I didn't say it out loud, and I confess it only now. I didn't even tell the children. You know, they'd laugh, and then everyone in the family would use it regularly, in the worst case in a derogatory situation.

Read more… →

RACEK – Rack, name change…

Rack, name change from Rack… …Cannot be done, this name is already reserved…

Rack! Your new name is Nathan. You’re home!…
Nathan, I understand my name and the command for instructional input is Nathan…

When we contemplate the monumentality of pan-planetary artificial intelligence, there are many pyramidally tuned names possible.
The name Nathan appears in the Perry Rhodan series, as I mentioned before; it is a very successful, ongoing story whose axis is created by a team of writers. These writers had to agree on timeless terms that would influence more than the story itself. These terminological playthings have occupied teams of collaborating translators for many years as the story has been translated into many languages. Such long-running team collaboration is found in only a few stories, and although Star Trek, for example, is very well known, it is the story of Perry Rhodan that gives a planetary artificial intelligence its name.

Read more… →

RACEK – Josef, I understand…

Josef, I understand… I just still don’t get it…

In the introductory reflection on the delivered hardware package of the artificial intelligence core in the form of a rack cabinet, on the long journey to today, and on listening to artificial intelligence, I offer a possible line of thought on how to arrive at this third part of the text…

The rack cabinet arrived; its size doesn't matter to me, so let it be the size of a wardrobe with extensive expansion options for testing. Connect two cables and press the Start button…

Read more… →