#opentowork

8 posts · Last used 9d

@tero@rukii.net · Mar 07, 2026
People use the word "capital" a bit haphazardly in the field of AI. When you invest capital in fixed assets or industrial machinery, it generally keeps its value and produces some profit. With AI training and inference costs, you burn the capital and transform it into data. You have to be continuously vigilant about persisting this valuable data and keeping track of its value. Otherwise you're just burning capital and getting nothing in return. #AI #OpenToWork
@tero@rukii.net · Mar 06, 2026
The world is a big ship; it doesn't turn on a dime. As Gibson said, the future isn't evenly distributed. That's why the global AI transformation contains not only multiple strategies but also multiple realities: the weight of the technological singularity is bending reality so that different businesses live in completely different worlds.

Should you build frontier models? Maybe. Do you see a niche for a specialist frontier model? Go for it! Do you think you can displace established B2B or SaaS products with AI-engineered solutions? Awesome! Just keep in mind that some of the opportunities we see are not mirages, yet they will still disappear once AI capabilities improve. Reality is bending, faster every quarter.

In times of change it makes sense to get back to basics and seek security in unchanging truths. For example: data containing valuable experience needs to be generated in a distributed way across the world, and that cannot be done by an AI enclosed within a closed data center. A topology of data value creation networks follows, and you can use it as a map of where you are and where you want to be. This will form the fabric of future economies. But it will take time for this to happen and propagate everywhere, so you need to make rational choices in these time-sensitive times to get where you want to be. #AI #AGI #OpenToWork
@tero@rukii.net · Mar 04, 2026
The human cerebellum is a very important brain structure for robotics. In Finnish it is called "the small brains", as it forms a sort of separate brain-like structure at the back of the head, where the brain joins the spinal column. The cerebellum contains more neurons than the rest of the brain combined. Its main function, modulating motor control, approximates supervised learning: it maps motor-cortex intent into actual real-time muscle control. That is why its function is so relevant for modern robotics. It takes the motor intent from the rest of the brain and predicts what largely proprioceptive (posture sense), vestibular, and visual result the action should produce, and especially the timing of those outcomes. When the sensory signal comes back, the cerebellum computes the prediction error and tunes the motor-control mapping accordingly. So brain motor control is largely expressed as proprioceptively and sensorily coded intents, and the cerebellum translates, or modulates, these intents into fine-grained motor control. These structures have inspired, and continue to inspire, many embodied-systems methods in modern robotic AI. #robotics #AI #OpenToWork
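The predict-compare-tune loop described above can be sketched as a tiny supervised forward model. This is a toy illustration, not a neuroscience model: the "plant" matrix standing in for the body and the delta-rule update are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown "plant": how motor commands actually map to sensory feedback.
true_plant = rng.normal(size=(3, 3))

# Cerebellum-like forward model: predicts the sensory outcome of a command.
W = np.zeros((3, 3))
lr = 0.1

for step in range(1000):
    command = rng.normal(size=3)           # motor intent from "cortex"
    predicted = W @ command                # cerebellar prediction of outcome
    actual = true_plant @ command          # delayed sensory feedback
    error = actual - predicted             # prediction error drives learning
    W += lr * np.outer(error, command)     # delta-rule (supervised) update

print(np.abs(W - true_plant).max())        # near zero: the mapping is learned
```

The key point the sketch shows is that no one hands the system the correct motor mapping; the sensory prediction error alone is enough of a supervision signal to tune it.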
@tero@rukii.net · Mar 04, 2026
One of the harder problems of robotic embodiment is safety. How do you guarantee standards-compliant, effective guardrails for generalist robots that are mobile and unrestricted in the tools they can use? For industrial robots it is practical to install light curtains that keep everyone out of the working area while the robot is active. But mobile robots can be anywhere, and you can't build a safe operating space around them. Even if your robot is weak in its joints and has no sharp corners, all bets are off once it grabs a power tool or sits in the driver's seat of a car.

This requires a paradigm shift in safety. You aren't trying to limit the robot's movement in the classical sense; you're trying to make it act in a way that prevents harm from happening. In many cases that means moving rather than stopping. Sometimes it means preventing something outside the robot from happening: if something heavy is about to fall in a dangerous fashion, the robot should try to stop it. This of course violates the strictly defined rules of classical robot safety, but those kinds of limited operating envelopes will not make generalist mobile robots safe. There are still good rationales for static constraint envelopes, for example that a malfunctioning robot shouldn't be able to crush anything to death, and such constraints still have their place. But they aren't enough, and approaching the safety challenge with only these kinds of methods in the toolbox won't lead to success. Robotic safety systems shouldn't only care about physical malfunctions of the robot itself, but also about malfunctions of other things. If a humanoid robot is preparing food and a cooking-oil fire breaks out, the robot should put it out instead of just stopping.

In general, robots should be robust against both degradations and extensions of their embodiments in order to function reliably in open environments. That alone should be solid protection against physical malfunctions: a robot that can walk after losing a leg should also function within reason, without causing danger, if one of its servos gets stuck active. While hierarchies and layers create robust safety, the highest embodied control layer must itself be made safe; it shouldn't lean on lower constraint envelopes to produce that safety. The robot must not step on a cat, or through inaction allow a cat to come to harm. If your robotic safety framework ceases to apply the moment the robot picks up a power tool, or presses the button that triggers the data center's halon extinguishers, it is not framed correctly. #AI #robotics #UniversalEmbodiment #OpenToWork
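The paradigm shift above, from "freeze on anomaly" to "act to minimize predicted harm", can be caricatured in a few lines. Everything here is hypothetical: the action names, the harm numbers, and the `predicted_harm` model are made up purely to show the decision structure, not a real safety framework.

```python
# Toy contrast between a static envelope (always stop) and harm-minimizing
# action selection. All names and numbers are illustrative assumptions.

def predicted_harm(action: str, world: dict) -> float:
    """Hypothetical model of expected harm if `action` is taken in `world`."""
    if world.get("shelf_falling"):
        # Freezing lets the shelf fall on someone; bracing it prevents harm.
        return {"stop": 0.9, "brace_shelf": 0.1, "retreat": 0.8}[action]
    # Nominal conditions: stopping is as safe as anything else.
    return {"stop": 0.0, "brace_shelf": 0.2, "retreat": 0.1}[action]

def choose_action(world: dict) -> str:
    candidates = ["stop", "brace_shelf", "retreat"]
    return min(candidates, key=lambda a: predicted_harm(a, world))

print(choose_action({"shelf_falling": False}))  # classical answer: stop
print(choose_action({"shelf_falling": True}))   # active intervention wins
```

The point of the sketch: under a static envelope "stop" is hard-coded as the safe action, whereas a harm-predicting layer can conclude that in some worlds stopping is the most dangerous thing the robot can do.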
@tero@rukii.net · Mar 02, 2026
Classically, pre-training was done to teach neural networks the symmetries of their domain. It is possible to handcraft neural architectures with an inductive bias for certain kinds of symmetries, like CNN layers with pooling for translation invariance. Remember that ANN-type neural networks are just differentiable computation graphs. Handcrafted invariant operations, however, are typically clumsy and inefficient, as you can see from classical machine vision methods like SIFT and LBP. It is also practically impossible to handcraft operators that are invariant to more complex symmetries, like perspective or the time of day in photos. So people took a neural backbone with fewer inductive biases and used a lot of representative data to pre-train it to tease apart and decompose the hidden explanatory variables, producing representations that are component-wise invariant to different domain symmetries. In plain language: the internal embeddings of such a network can have a specific activation pattern that encodes a cat, no matter what the time of day or the perspective is. Nowadays we have encoder-side Transformer-type models with very few strict inductive biases, except that the signal forms a causal sequence and needs to be represented as tokens. We also have far more data than we used to. So what happens with representation learning? We don't learn only simple symmetries anymore; we start learning transferable knowledge and transferable cognitive skills as well. Some of this comes from the causal representation of the signal. Is there more? Is intelligence anything more than transferable knowledge and cognitive skills? I don't think so.
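The handcrafted end of the spectrum mentioned above, convolution plus pooling for translation invariance, fits in a few lines of NumPy. The kernel and signal values are made up for illustration; the point is the structure: convolution is translation-equivariant, and a global pool on top of it yields a translation-invariant feature.

```python
import numpy as np

def conv_then_pool(signal: np.ndarray, kernel: np.ndarray) -> float:
    # Slide the pattern detector over the signal (translation-equivariant),
    # then keep only the strongest response (translation-invariant).
    responses = np.correlate(signal, kernel, mode="valid")
    return responses.max()

kernel = np.array([1.0, -2.0, 1.0])            # a tiny "pattern detector"
pattern = np.array([0.0, 3.0, -1.0, 3.0, 0.0])

x = np.zeros(32)
x[5:10] = pattern                               # pattern at position 5
x_shifted = np.zeros(32)
x_shifted[20:25] = pattern                      # same pattern at position 20

print(conv_then_pool(x, kernel) == conv_then_pool(x_shifted, kernel))  # True
```

This is exactly the kind of symmetry that is easy to hard-wire; there is no analogous three-line operator for "same cat, different time of day", which is why that invariance has to be learned from data instead.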
If a machine learns the decomposed representations of the symmetries and hidden explanatory factors of the signal modalities, and furthermore the knowledge and cognitive skills represented in the data, we already have the holy grail of #AI in our hands. Then the question becomes one of scaling it up and applying it to everything that is bottlenecked by knowledge and skills. And here we are. If you need help navigating the changing world under the AI-driven transformation, I am #OpenToWork. I am an AI generalist with over 25 years of experience.
@tero@rukii.net · Feb 23, 2026
Using AI agents for screening interviews is on the rise, and it brings several challenges. One is that agents allow an employer to interview many more candidates. That translates into many more interview hours demanded from candidates, who only have a limited number of them, and into a higher rejection rate when the screening is done by an AI agent. The market will of course adapt: as applicants are asked for more interview hours than they have, they will naturally start prioritizing human screenings, which have a lower rejection rate. Another challenge is that all frontier AI models are trained to respect the human, so if persuasive speaking skills were effective in human interviews, they are many times more effective against AI agents. And why have a synchronous phone call at all if you have AI agents? It would be more respectful of applicants' time to do it as a text chat, without scheduling troubles. People would be far more willing to be screened by an AI if it were more convenient for them than a call with a human recruiter. #AI #OpenToWork
@tero@rukii.net · Feb 16, 2026
National economies have occasionally gone through mode changes: the shift to a military command economy and back, into slave economies and back, socialist revolutions, all kinds of transitions. Now a similar one is happening with #AI. Some people don't realize that the whole framework of the economy is changing as labor is displaced by capital. They are still using the same Excel sheets to try to value investments in terms of future profits. It's not going to work. You need to play your capital to position yourself in the new model, not in the model being replaced. You need to be aware of the map of the near-future economy, how it is structured along data flows and data value creation, and then plan how you're going to play your hand to reach a good position on that map. If you need advice or help, I am an AI generalist with over 25 years of experience, currently #OpenToWork. Let's chat!
@tero@rukii.net · Feb 11, 2026
As a software engineer with over 25 years of experience, I am no stranger to technical skills becoming obsolete. I don't need to remember Commodore 64 memory-mapped control addresses anymore, yet my brain still contains "POKE 53281,2". Now, with automated coding tools, I am hit simultaneously by both an extreme speed of obsolescence, in terms of which technical knowledge is actually needed and what can be offloaded to AI assistants, and an ever-growing range of technical tools in actual use, since you are no longer limited by the availability of specialists. For now I still find prosperity on the thin layer of knowledge and skills that I know but AIs do not yet know, working on making AIs learn it while making more progress myself. In a way nothing is new: obsolescence has always crept up behind software engineers, and they have always automated their own work. Every day is a new day, different from the past. But the pace has become inhuman.

I wonder how long software engineers can keep finding new domain knowledge faster than AIs can. How long will the process of improving the rate and scope of automation have places where a human can meaningfully contribute? As I am currently looking for new opportunities and #OpenToWork, I can't help but wonder whether this will be the last job I'll ever do. Knowledge is created where the work happens, and that knowledge feeds AI progress. For now the work still has a human component that plays a significant role in this knowledge creation. But the human role is being pushed from the supply side to the demand side: more and more of the work is about asking AIs to do something. With per-token transaction fees, it feels more like an act of consumption than an act of creation. More demand than supply.

I am not so naive as to believe that humans will always be needed on the demand side of the equation, though for obvious reasons we want to stay in that saddle for as long as possible, no matter how hard the bull tries to throw us off. I have heard many stories about how human contribution will remain central and significant in our economies, but none of them holds up under closer scrutiny. Lots of strategies and plans for staying relevant, all liable to fall apart on contact with reality. As a fellow software professional, do you think there is still long-term hope, or is it just about finding the ship that sinks the slowest? #AI #automation
