<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
    <title></title>
    <link rel="self" type="application/atom+xml" href="https://kaiwegner.online/atom.xml"/>
    <link rel="alternate" type="text/html" href="https://kaiwegner.online"/>
    <generator uri="https://www.getzola.org/">Zola</generator>
    <updated>2026-04-10T00:00:00+00:00</updated>
    <id>https://kaiwegner.online/atom.xml</id>
    <entry xml:lang="en">
        <title>AI Productivity Paradox - Are we actually moving slower?</title>
        <published>2026-04-10T00:00:00+00:00</published>
        <updated>2026-04-10T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/ai-productivity-paradox-are-we-actually-moving-slower/"/>
        <id>https://kaiwegner.online/blog/ai-productivity-paradox-are-we-actually-moving-slower/</id>
        
<content type="html" xml:base="https://kaiwegner.online/blog/ai-productivity-paradox-are-we-actually-moving-slower/">&lt;p&gt;I’ve been looking into some recent studies and commentary on AI-assisted development. Where and how we use these tools turns out to be decisive.&lt;&#x2F;p&gt;
&lt;p&gt;There is a difference between &quot;feeling&quot; fast and actually being efficient. If you have ever driven an old car and then a new one, you know what I mean.&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;Sometimes, tasks with AI felt more engaging than they would have been otherwise. This was especially the case for repetitive ones like writing lots of similar tests or classes. Making the tasks into an interactive game, where I try to get the agent to do all the work with minimal manual intervention, was more fun than churning out very similar code over and over. But I don’t think it was faster.&lt;&#x2F;p&gt;
&lt;p&gt;[...] if I were shooting for maximum productivity on these sorts of issues, I would spend a lot of up-front time writing detailed issue descriptions, including specific implementation suggestions.&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;domenic.me&#x2F;metr-ai-productivity&#x2F;&quot;&gt;https:&#x2F;&#x2F;domenic.me&#x2F;metr-ai-productivity&#x2F;&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Several studies suggest we might be falling into a &quot;perception gap&quot; that creates long-term technical debt and hidden costs.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;highlights-or-lowlights&quot;&gt;Highlights (or lowlights):&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;strong&gt;19% Slowdown&lt;&#x2F;strong&gt;: A study by METR found that while devs felt 20% faster using AI, they were actually 19% slower on complex, real-world tasks. We’re essentially trading deep work for &quot;Review Fatigue.&quot;&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Mastery Gap&lt;&#x2F;strong&gt;: Anthropic research shows that AI-assisted coding can reduce logic mastery by 17%, making it much harder for us to debug the very code we &quot;wrote&quot; just an hour later.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;&quot;Code Churn&quot; Liability&lt;&#x2F;strong&gt;: GitClear’s analysis of 200M+ lines of code shows a 60% decline in refactoring and a massive spike in &quot;churn&quot;—code that has to be deleted or fixed within two weeks.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;my-takeaways&quot;&gt;My Takeaways:&lt;&#x2F;h2&gt;
&lt;ol&gt;
&lt;li&gt;If used for complex architecture or &quot;black box&quot; logic, AI is a liability.&lt;&#x2F;li&gt;
&lt;li&gt;It is a superpower for boilerplate, regex, and unit tests.&lt;&#x2F;li&gt;
&lt;li&gt;Let’s be very intentional about which is which.&lt;&#x2F;li&gt;
&lt;li&gt;Using AI to solve a problem you don&#x27;t fully understand yet is offloading the cost to the next person.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;Direct links for those interested in the data:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;METR Study: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;domenic.me&#x2F;metr-ai-productivity&#x2F;&quot;&gt;https:&#x2F;&#x2F;domenic.me&#x2F;metr-ai-productivity&#x2F;&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Anthropic: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;arxiv.org&#x2F;html&#x2F;2601.20245v1&quot;&gt;https:&#x2F;&#x2F;arxiv.org&#x2F;html&#x2F;2601.20245v1&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;GitClear: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.gitclear.com&#x2F;coding_on_copilot_data_2024_report&quot;&gt;https:&#x2F;&#x2F;www.gitclear.com&#x2F;coding_on_copilot_data_2024_report&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;that-being-said&quot;&gt;That being said ...&lt;&#x2F;h2&gt;
&lt;p&gt;I use AI in particular cases, but in others it poses a slow-down.&lt;&#x2F;p&gt;
&lt;p&gt;Here is a concrete example: you use a publicly available framework and have written your own wrapper, which resides in another repository and is only linked in as a package.&lt;&#x2F;p&gt;
&lt;p&gt;Since you wrapped the framework and grouped certain function calls to reduce boilerplate, your AI agent is now completely in the dark about what happens inside your wrapper functions. It cannot anticipate this, and it cannot read the code to understand it. Worse, it has little to no capacity to learn from any examples you provide in addition.&lt;&#x2F;p&gt;
&lt;p&gt;In my experience, the training on open-source code outweighs your provided examples.&lt;&#x2F;p&gt;
&lt;p&gt;So the agent goes ahead and tries to mock functionality using APIs it knows, when it could spin up an actual instance and test against it. Or it works around your wrapper altogether to hide its incompetence.&lt;&#x2F;p&gt;
&lt;p&gt;So I am just not doing things like this anymore.&lt;&#x2F;p&gt;
&lt;p&gt;When I have a test function, I ask it to create more tests to ensure the behaviour in all cases is documented and well covered. But I need to provide at least one test function for it to work from.&lt;&#x2F;p&gt;
&lt;p&gt;When I have a custom framework it does not and cannot know (e.g. a binary package), I provide ample context and shrink the scope of work down until that does not matter.&lt;&#x2F;p&gt;
&lt;p&gt;In a way then, I am doing a lot of thinking so the AI can do the typing ;-)&lt;&#x2F;p&gt;
&lt;p&gt;Curious where we are moving to next!&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Reclaiming Agency for the Next Generation</title>
        <published>2026-02-03T00:00:00+00:00</published>
        <updated>2026-02-03T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/reclaiming-agency-for-the-next-generation/"/>
        <id>https://kaiwegner.online/blog/reclaiming-agency-for-the-next-generation/</id>
        
        <content type="html" xml:base="https://kaiwegner.online/blog/reclaiming-agency-for-the-next-generation/">&lt;h2 id=&quot;the-tyranny-of-the-single-point-of-access&quot;&gt;The Tyranny of the Single Point of Access&lt;&#x2F;h2&gt;
&lt;p&gt;It is now 2026. We have spent the last few decades building a global nervous system of advanced communication. In theory, the information bottleneck should be a relic of the past; knowledge gaps should be shrinking. Yet, as I look at the interface between our children and the world, I see a disturbing regression.&lt;&#x2F;p&gt;
&lt;p&gt;Have you ever noticed how we have effectively &quot;de-skilled&quot; the childhood experience by funneling every interaction through a single, glass slab? We’ve traded tactile autonomy for a Walled Garden that a seven-year-old can&#x27;t even climb over without a parent&#x27;s biometric scan (or credit card for that matter).&lt;&#x2F;p&gt;
&lt;h2 id=&quot;how-we-modernized-dependence&quot;&gt;How We Modernized Dependence&lt;&#x2F;h2&gt;
&lt;p&gt;When we look at the daily &quot;digital friction&quot; in a child&#x27;s life, the systemic failure becomes clear. Let’s break down the current state of play:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Attention Vacuum:&lt;&#x2F;strong&gt;  At the grocery store the father is buried in a shopping app, optimizing coupons (at &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;play.google.com&#x2F;store&#x2F;apps&#x2F;details?id=de.edeka.genuss&amp;amp;gl=DE&quot;&gt;Edeka&lt;&#x2F;a&gt;). Meanwhile, his child is absorbing uncontextualized, sensationalist headlines from a nearby newspaper stand. The smartphone doesn&#x27;t just provide information; it creates a cognitive prison that prevents the parent from acting as the necessary contextualizer for the child&#x27;s reality.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Voice Command Paradox:&lt;&#x2F;strong&gt; A mother triggers a &quot;Smart Home&quot; scene via Siri. When the child tries to do the same, the AI fails to recognize the pitch or pronunciation. The result? The child is a guest in their own home, unable to even turn on a light without a mediator.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Social Gatekeeping:&lt;&#x2F;strong&gt; A child wants to play with a friend. In 1996, they’d pick up a landline. In 2026, they have to beg a parent to send a message on Facegram to another child&#x27;s parent. We have turned spontaneous social connection into a scheduled administrative task managed by parents.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Why is this a problem? Let’s look at this through the lens of &lt;strong&gt;Systemic Accessibility&lt;&#x2F;strong&gt;:&lt;&#x2F;p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;&lt;&#x2F;th&gt;&lt;th&gt;The Good&lt;&#x2F;th&gt;&lt;th&gt;The Bad&lt;&#x2F;th&gt;&lt;th&gt;The Ugly&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Control&lt;&#x2F;td&gt;&lt;td&gt;Physical switches and buttons&lt;&#x2F;td&gt;&lt;td&gt;Software layers and voice ID&lt;&#x2F;td&gt;&lt;td&gt;Total dependence on parent&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Media&lt;&#x2F;td&gt;&lt;td&gt;Physical ownership of CDs&#x2F;Cassettes&lt;&#x2F;td&gt;&lt;td&gt;Temporary licensing, subscription streams&lt;&#x2F;td&gt;&lt;td&gt;No autonomy over content or repetition&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Repair&lt;&#x2F;td&gt;&lt;td&gt;Mechanical, modular, interchangeable, electrical&lt;&#x2F;td&gt;&lt;td&gt;Glued glass, proprietary, high-tech&lt;&#x2F;td&gt;&lt;td&gt;Tech is &quot;magical&quot; and disposable&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;p&gt;We have moved away from individual, accessible technologies toward a Single Point of Access: the smartphone. It is a device controlled by mega-corporations, designed to harvest attention, and governed by algorithms that a five-year-old cannot—and should not—have to navigate. Not only that: users demand and welcome parental controls when they should instead ask why the very thing they are holding is not child-appropriate in the first place.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;engineering-empowerment&quot;&gt;Engineering Empowerment&lt;&#x2F;h2&gt;
&lt;blockquote&gt;
&lt;p&gt;As a technologist, I’m &lt;strong&gt;not&lt;&#x2F;strong&gt; saying we should move to the woods. I’m saying we need to design for Human Agency. We need to give kids the &quot;Lego bricks&quot; of technology—tools they can manipulate, own, and understand without a corporate tether.&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;h2 id=&quot;how-to-decentralize-your-home-for-your-kids&quot;&gt;How to Decentralize Your Home for Your Kids:&lt;&#x2F;h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Tactile Audio&lt;&#x2F;strong&gt;: Move away from &quot;smartphone-only&quot; speakers. High-quality hardware like Teufel systems or dedicated CD players allows a child to physically choose their music. It gives them the &quot;What&quot; and the &quot;When&quot;.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Open Access Communication&lt;&#x2F;strong&gt;: Reintroduce a DECT landline or a dedicated IP phone (providers like &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;dus.net&#x2F;&quot;&gt;dus.net&lt;&#x2F;a&gt; are excellent for this). A child should be able to call the fire department or a friend without needing a parent to unlock a screen or supply a PIN or Face ID.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Curated Digital Playgrounds&lt;&#x2F;strong&gt;: Use Steam Family features on an old laptop or tools like &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.edurino.com&quot;&gt;Edurino&lt;&#x2F;a&gt; on tablets. This provides a &quot;Sandbox&quot; where they can explore within safe, predefined boundaries rather than a limitless, predatory feed.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Physical Autonomy&lt;&#x2F;strong&gt;: Even simple &quot;low-tech&quot; fixes matter. Put their dishes on a bottom shelf. Ensure their home provides many interfaces they can master.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;h2 id=&quot;awareness&quot;&gt;Awareness&lt;&#x2F;h2&gt;
&lt;p&gt;We are currently suffering from a lack of Awareness, not a lack of options. We have mistaken &quot;streamlined&quot; for &quot;better.&quot; For an adult, a smartphone is a Swiss Army knife; for a child, it is a locked box.&lt;&#x2F;p&gt;
&lt;p&gt;When you choose media and technology for your family, ask yourself: Does this tool empower my child to act independently, or does it require them to ask for my permission to exist in the digital world?&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;Choose the means that empower!&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;Avoid the surveillance. Give them a world they can actually reach.&lt;&#x2F;p&gt;
&lt;p&gt;I don&#x27;t want to give anyone parenting advice; instead, I expect us all to consider what is good for us and actively decide for &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;di.day&#x2F;category&#x2F;rezepte&#x2F;&quot;&gt;our own sanity&lt;&#x2F;a&gt; and that of the next generation.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;useful-links-no-affiliation&quot;&gt;Useful links (no affiliation):&lt;&#x2F;h2&gt;
&lt;p&gt;Here is a book that might help:
&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;shop.digitalcourage.de&#x2F;store-products.php?seo_c=digitalcourage-buecher-und-broschueren%2Ffiktion%2F&amp;amp;seo=buch-kirschner-und-brandstatter-ada-und-zangemann&quot;&gt;Ada und Zangemann&lt;&#x2F;a&gt;, to learn what hardware, software, repairability and ownership mean.&lt;&#x2F;p&gt;
&lt;p&gt;Here is a radio with Bluetooth, DAB and &lt;strong&gt;BUTTONS&lt;&#x2F;strong&gt;: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;teufel.de&#x2F;radio-one-106138000&quot;&gt;Teufel Radio One&lt;&#x2F;a&gt;. It is also an alarm clock and displays the time and date. My kids use it to listen to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.wdrmaus.de&#x2F;hoeren&#x2F;dab-maus-radio.php5&quot;&gt;Maus Radio&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;Get a used laptop anywhere, for example at &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.backmarket.de&#x2F;de-de&quot;&gt;Back Market&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;Use &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;store.steampowered.com&#x2F;promotion&#x2F;familysharing&#x2F;&quot;&gt;Steam Family Sharing&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Use &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.edubuntu.org&#x2F;&quot;&gt;Edubuntu&lt;&#x2F;a&gt; or any other Linux Distribution.&lt;&#x2F;p&gt;
&lt;p&gt;Leave Social Media and Streaming. Own your digital life. &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;di.day&#x2F;category&#x2F;rezepte&#x2F;&quot;&gt;#didit&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Back To Linux</title>
        <published>2024-05-01T00:00:00+00:00</published>
        <updated>2024-05-01T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/mac-to-linux/"/>
        <id>https://kaiwegner.online/blog/mac-to-linux/</id>
        
        <content type="html" xml:base="https://kaiwegner.online/blog/mac-to-linux/">&lt;h2 id=&quot;intro&quot;&gt;Intro&lt;&#x2F;h2&gt;
&lt;p&gt;In today&#x27;s computing age, it&#x27;s remarkable how the fundamentals of interacting with our devices and software have remained largely unchanged for over three decades. Reflecting on Microsoft Office 1.0 from 1989, one finds an uncanny familiarity with contemporary office suites. Whether it&#x27;s creative, professional tasks or gaming, the core interaction remains consistent, albeit with significant advancements in audiovisual quality.&lt;&#x2F;p&gt;
&lt;p&gt;However, amidst this stagnation, the open-source community has thrived, particularly evident in the Linux desktop experience, now a highly competitive alternative to proprietary systems preoccupied with ads or outright abandoned (looking at you, Apple).&lt;&#x2F;p&gt;
&lt;p&gt;The tough part for Desktop Linux has always been hardware support, which is especially critical on laptops. Here, power management is essential and can only be achieved if all the hardware is fully supported by the operating system.&lt;&#x2F;p&gt;
&lt;p&gt;Since early 2022 I was rocking the Apple MacBook Pro 14&quot; with M1 Pro and 16 GB of RAM, and earlier this year (2024) I switched to a &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.notebookcheck.com&#x2F;Tuxedo-Pulse-14-Gen3-im-Test-Linux-Ultrabook-mit-AMD-Zen4-und-120-Hz-Display.793074.0.html&quot;&gt;TUXEDO Pulse 14 - Gen3&lt;&#x2F;a&gt;. This one rocks an AMD Ryzen 7 7840HS and 32 GB of RAM. So we are now looking at this laptop from the perspective of someone coming from the Apple MacBook Pro.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;mac-to-linux&#x2F;tuxedo-pulse-14-gen-3.jpg&quot; alt=&quot;Tuxedo Pulse 14 Gen 3&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-good&quot;&gt;The Good&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;The matte-finished display ensures excellent readability.&lt;&#x2F;li&gt;
&lt;li&gt;The compute power of the device is on par with the M1 Pro I was coming from.&lt;&#x2F;li&gt;
&lt;li&gt;32 GB of RAM is ample for my use cases; very happy to have it.&lt;&#x2F;li&gt;
&lt;li&gt;The keyboard features a really good typing experience, even if you need to adjust your muscle memory when coming from a Mac for all the shortcuts.&lt;&#x2F;li&gt;
&lt;li&gt;It has proper ports, including two USB-A ports.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;the-bad&quot;&gt;The Bad&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;The trackpad is quite small and has a physical click button at the bottom, meaning click-dragging from the bottom to the top does not work.&lt;&#x2F;li&gt;
&lt;li&gt;Battery management is an activity for the user on Linux. Technically, the battery lasts 5 to 12 hours depending on what you are doing. You have to switch from power saving to performance manually. Closing the lid puts the system into standby but noticeably drains the battery during your commute (&lt;a href=&quot;&#x2F;blog&#x2F;sleep-then-hibernate&quot;&gt;FIX&lt;&#x2F;a&gt;).&lt;&#x2F;li&gt;
&lt;li&gt;If (big if here) the fan kicks in, it is noticeably louder than the MacBook Pro&#x27;s.&lt;&#x2F;li&gt;
&lt;li&gt;The provided power supply does its job, but why is the cable permanently fixed to it? I have since switched to a more versatile solution (&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.ecosia.org&#x2F;search?tt=mzl&amp;amp;q=UGREEN%20Nexode%20100W%20USB%20C&quot;&gt;UGREEN Nexode 100W USB-C&lt;&#x2F;a&gt;).&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;the-ugly&quot;&gt;The Ugly&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;The webcam should have been left out of the device altogether; it is so bad. &lt;em&gt;I have little issue with this as I am mostly working from my home office which offers a proper external camera.&lt;&#x2F;em&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Waking from hibernate is essentially a reboot, which takes far more time than waking a Mac from sleep.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;Transitioning from an Apple MacBook Pro to the TUXEDO Pulse 14 - Gen3, powered by Linux, has been an intriguing journey. While certain aspects of the experience may not match the seamless integration of its proprietary counterpart, the TUXEDO Pulse 14 presents a compelling alternative, particularly for users invested in open-source values and seeking a robust computing environment for a lower price.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;mac-to-linux&#x2F;gnome-46-desktop.jpg&quot; alt=&quot;Gnome 46 Desktop&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;The laptop&#x27;s strengths, including its commendable compute power, ample RAM, and versatile port selection, underscore its suitability for diverse tasks, from game development in Unity to C# backend development and office productivity. Moreover, its matte-finished display enhances readability, contributing to a comfortable working environment.&lt;&#x2F;p&gt;
&lt;p&gt;However, the transition was not without its challenges. Especially the webcam and the sluggish wake-from-hibernate process have to be noted here.&lt;&#x2F;p&gt;
&lt;p&gt;Despite a few drawbacks I am happy with my &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;tuxe.do&#x2F;pulse14&quot;&gt;TUXEDO Pulse 14&lt;&#x2F;a&gt; - Gen3. Considering its competitive pricing and alignment with my specific use cases, I love that I can now run Linux on the go and embrace open-source innovation.&lt;&#x2F;p&gt;
&lt;p&gt;Here is the wallpaper I used for this: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;kaiwegner.artstation.com&#x2F;projects&#x2F;GeoJVQ&quot;&gt;LINK&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Sleep then Hibernate</title>
        <published>2024-04-30T00:00:00+00:00</published>
        <updated>2024-04-30T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/sleep-then-hibernate/"/>
        <id>https://kaiwegner.online/blog/sleep-then-hibernate/</id>
        
<content type="html" xml:base="https://kaiwegner.online/blog/sleep-then-hibernate/">&lt;p&gt;This is useful for Linux users.&lt;&#x2F;p&gt;
&lt;p&gt;With these changes, closing the laptop lid puts the system into suspend-then-hibernate: it suspends first, then after 1800 seconds (30 minutes) switches to hibernate to save battery.&lt;&#x2F;p&gt;
&lt;p&gt;Ensure hibernation works first. For this you need a swap partition at least the size of your RAM.&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;ini&quot;&gt;# &#x2F;etc&#x2F;systemd&#x2F;logind.conf
[Login]
HandleLidSwitch=suspend-then-hibernate
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;pre&gt;&lt;code data-lang=&quot;ini&quot;&gt;# &#x2F;etc&#x2F;systemd&#x2F;sleep.conf
[Sleep]
HibernateDelaySec=1800
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
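&lt;p&gt;To sanity-check the setup (a rough sketch; exact output and required steps vary by distribution), compare your swap size against your RAM and trigger hibernation manually once before relying on the lid switch:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;sh&quot;&gt;# swap should be at least as large as RAM for hibernation
free -h            # total RAM
swapon --show      # active swap devices and their sizes

# test hibernation manually; the machine should power off
# and restore your session on the next boot
sudo systemctl hibernate
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The logind.conf change takes effect after restarting systemd-logind or simply rebooting.&lt;&#x2F;p&gt;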
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Growth</title>
        <published>2023-11-19T00:00:00+00:00</published>
        <updated>2023-11-19T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/growth/"/>
        <id>https://kaiwegner.online/blog/growth/</id>
        
        <content type="html" xml:base="https://kaiwegner.online/blog/growth/">&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;&#x2F;h2&gt;
&lt;p&gt;Working in a fast-growing environment, as is often the case in the tech industry, poses a lot of challenges. One such challenge is forming a common understanding of what growth means for the organization.&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;Growth is a material increase in economic production and consumption.&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;In this post I want to explore how I navigate that topic in my role as a Technical Director at Edurino.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;about-growth&quot;&gt;About Growth&lt;&#x2F;h2&gt;
&lt;p&gt;One of the perks of working in an evolving environment is that we are constantly building new systems and processes. But we cannot just build new infrastructure all day, as this would create too much weight to carry moving forward. After all, every system, every piece of infrastructure and every process requires maintenance to keep it running, training to keep everyone informed and advocacy to keep everyone engaged.&lt;&#x2F;p&gt;
&lt;p&gt;To stay interested and not run out of breath, we need very little attachment to what we do and very high attachment to what we want to achieve.&lt;&#x2F;p&gt;
&lt;p&gt;As a result, we know we are building everything for the moment. We accept that everything we build is legacy and out of date a priori.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-growth-does-to-an-organization&quot;&gt;What Growth does to an Organization&lt;&#x2F;h2&gt;
&lt;p&gt;Were we to maintain &lt;em&gt;all&lt;&#x2F;em&gt; the systems we build, it would increase the demand for communication and alignment to the point where we lose all agility and cannot react to changed requirements anymore.&lt;&#x2F;p&gt;
&lt;p&gt;Adding more and more responsibilities to our organization would scatter our focus and put a lot of responsibility on a select few. Hiring talent to spread the responsibility takes a lot of time. It is also a non-solution: it drastically increases operating cost and only postpones the issue until the complexity and size of the operation become unmanageable.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-do-you-do-to-adjust&quot;&gt;What do you do to adjust?&lt;&#x2F;h2&gt;
&lt;p&gt;If building and subsequent abandonment is the nature of the beast, everyone in the organization needs to understand and live that. So we need to form and &#x27;cement&#x27; a common understanding of our values within the company.&lt;&#x2F;p&gt;
&lt;p&gt;Obviously, when you abandon a system or process, you have good reasoning and &lt;em&gt;let&#x27;s hope&lt;&#x2F;em&gt; data to back your decision up. You should share the insights and practice transparency and open communication. This reduces the chances of the people involved feeling overruled or ignored.&lt;&#x2F;p&gt;
&lt;p&gt;To enable swift change, we aim to document a little too much and, if anything, overcommunicate the reasoning for abandoning a system and what the new requirements are.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>A changed understanding of Intellectual Property</title>
        <published>2022-12-30T00:00:00+00:00</published>
        <updated>2022-12-30T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/a-changed-understanding-of-interlectual-property/"/>
        <id>https://kaiwegner.online/blog/a-changed-understanding-of-interlectual-property/</id>
        
<content type="html" xml:base="https://kaiwegner.online/blog/a-changed-understanding-of-interlectual-property/">&lt;p&gt;In my &lt;a href=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;metaverse&#x2F;&quot;&gt;article about the Metaverse&lt;&#x2F;a&gt; and why it is not yet here, I point to a required change in our understanding of intellectual property.&lt;&#x2F;p&gt;
&lt;p&gt;To reiterate, we need a changed understanding of what constitutes intellectual property. There is a lot going on around you that is already changing this, and both business and law have a lot of catching up to do.&lt;&#x2F;p&gt;
&lt;p&gt;Look at fan-edits of movies:&lt;&#x2F;p&gt;
&lt;div &gt;
    &lt;iframe
        width=&quot;640&quot; height=&quot;320&quot;
        src=&quot;https:&#x2F;&#x2F;www.youtube-nocookie.com&#x2F;embed&#x2F;DkuIHWJmqUg?si=2vbRYSqLR_nLn7a8&quot;
        title=&quot;YouTube video player&quot; 
    frameborder=&quot;0&quot; 
    allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; 
    allowfullscreen&gt;
    &lt;&#x2F;iframe&gt;
&lt;&#x2F;div&gt;
&lt;p&gt;A complete recreation of a piece of silicon from the original Amiga in Minecraft:&lt;&#x2F;p&gt;
&lt;div &gt;
    &lt;iframe
        width=&quot;640&quot; height=&quot;320&quot;
        src=&quot;https:&#x2F;&#x2F;www.youtube-nocookie.com&#x2F;embed&#x2F;WZ2KJMxfViw?si=S_t1A3RlMgx-DsCi&quot;
        title=&quot;YouTube video player&quot; 
    frameborder=&quot;0&quot; 
    allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; 
    allowfullscreen&gt;
    &lt;&#x2F;iframe&gt;
&lt;&#x2F;div&gt;
&lt;p&gt;Black Mesa, a fan-made remake of Half-Life:&lt;&#x2F;p&gt;
&lt;div &gt;
    &lt;iframe
        width=&quot;640&quot; height=&quot;320&quot;
        src=&quot;https:&#x2F;&#x2F;www.youtube-nocookie.com&#x2F;embed&#x2F;G_TcAxAKCAI?si=4VmR-H-0Pgj4v23T&quot;
        title=&quot;YouTube video player&quot; 
    frameborder=&quot;0&quot; 
    allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; 
    allowfullscreen&gt;
    &lt;&#x2F;iframe&gt;
&lt;&#x2F;div&gt;
&lt;p&gt;and &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;youtu.be&#x2F;HJeB5_C3s98&quot;&gt;many&lt;&#x2F;a&gt; more.&lt;&#x2F;p&gt;
&lt;p&gt;Obviously I think open source plays a huge role in this. Not just because it changed business and the understanding of how to make money with software, but because it changed society’s expectation about what is owned by whom.&lt;&#x2F;p&gt;
&lt;p&gt;Software is not owned by corporations anymore; instead, communities and users are the actual owners. This to me seems to be a self-fulfilling prophecy, as our service-oriented product age is rooted in “the customer is always right”, which could be exaggerated to “the customer dictates what the product is”.&lt;&#x2F;p&gt;
&lt;p&gt;Although this is not quite how things work (customers can only choose from what is available on the market), it is clear that customers are being empowered more and more (by software especially).&lt;&#x2F;p&gt;
&lt;p&gt;Thus the distinction between customer and producer is blurred, and the same goes for the distinction between a product and an ingredient.&lt;&#x2F;p&gt;
&lt;p&gt;Modding, remixing and the public domain are very important parts of this overall trend, and I am very much looking forward to what we will see happening in this space in the near future.&lt;&#x2F;p&gt;
&lt;div &gt;
    &lt;iframe
        width=&quot;640&quot; height=&quot;320&quot;
        src=&quot;https:&#x2F;&#x2F;www.youtube-nocookie.com&#x2F;embed&#x2F;Vg52-HZhrFc?si=Q5LtmLR1Hpl2fi9_&quot;
        title=&quot;YouTube video player&quot; 
    frameborder=&quot;0&quot; 
    allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; 
    allowfullscreen&gt;
    &lt;&#x2F;iframe&gt;
&lt;&#x2F;div&gt;</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Metaverse</title>
        <published>2022-12-20T00:00:00+00:00</published>
        <updated>2022-12-20T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/metaverse/"/>
        <id>https://kaiwegner.online/blog/metaverse/</id>
        
        <content type="html" xml:base="https://kaiwegner.online/blog/metaverse/">&lt;p&gt;In the past year (2021) especially the whole tech bubble went on and on about the metaverse. But I think it is premature to announce its conception. I have a very clear vision of what the metaverse ultimately is and thus of the prerequisites needed to actually make it happen.&lt;&#x2F;p&gt;
&lt;p&gt;Obviously there are a lot of definitions of the metaverse out there, but I think most of them are missing the point of what will be the next digital revolution. So here we go.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-next-digital-revolution&quot;&gt;The next digital revolution.&lt;&#x2F;h2&gt;
&lt;p&gt;Compared to the last digital revolution, this one will be even more drastic, and it will touch even more aspects of our daily life. Not only do we need to change our understanding of how we deal with data, as the world wide web once forced us to; we also need to change our understanding of computing itself.&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;The Metaverse will drastically change what privacy and property mean for hardware, software and data.&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;h2 id=&quot;the-vision&quot;&gt;The Vision.&lt;&#x2F;h2&gt;
&lt;p&gt;Let’s start at the end here. The Metaverse for me is a “compute anywhere” scenario. With the most recent (www) digital revolution, we were suddenly able to access the complete knowledge (Data) contained in the internet at the press of a button. With the next one, we will be able to harness all computational power of humanity at any time.&lt;&#x2F;p&gt;
&lt;p&gt;This will not only encompass the actual processing resources, but also the algorithms that we as humans have created. This means that fidelity, human and contextual awareness, uniqueness, personalisation and the value of the output of every computational effort and every human-machine interaction will be on a whole different level.&lt;&#x2F;p&gt;
&lt;p&gt;It will enable an almost (not down to the molecule) complete digital twin of reality where every aspect of reality is linked to every other one. But have no doubt: It will happen in the pursuit of making (more) money.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-concrete-future&quot;&gt;The concrete future.&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;strong&gt;You will be able to answer any question regarding any known physical detail of our world&lt;&#x2F;strong&gt; (known meaning there is a feasible explanation): When was this building constructed? Where did the bricks come from? Which companies were involved in the construction? Which brand and product type is the tap in the bathroom? How much would a new one cost?&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;You will be able to create, change, reproduce and remix every type of data&lt;&#x2F;strong&gt; at any time everywhere: How long is this desk? Would this plant fit and thrive in my home next to the couch? What would it look like standing there? Is there another item that would fit better especially with my taste and gardening skills?&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;You will be able to synthesize and simulate any situation:&lt;&#x2F;strong&gt; You can capture and relive a moment later, or create a scenario from scratch: What if I said something different? What if we moved on and did not stop here? Am I a good public speaker? Did I convince the others?&lt;&#x2F;p&gt;
&lt;p&gt;I was specifically focusing on the value this tech will provide the end-user in the real world, because I think this is where this vision drastically differs from what is commonly portrayed as the metaverse. In particular, it will not require you to buy a specific device such as an AR&#x2F;VR headset to profit from it. Instead you will have access to every bit of data and every processing unit at the same time. There will be a few device-dependent applications for enthusiasts, creators and&#x2F;or developers. The rest of us will be able to get access using common, existing tools (e.g. smartphones, laptops, smart TVs). Displays and speakers will be accessible from any device, no matter whether you own the actual device or not.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-this-is-not-happening-yet&quot;&gt;Why this is not happening (yet).&lt;&#x2F;h2&gt;
&lt;p&gt;While I could go on and on about the possibilities of the Metaverse, I am fully aware that this is only scratching the surface of what will be. Instead it might be more insightful to ask what we need in order to actually come closer to it.&lt;&#x2F;p&gt;
&lt;p&gt;For this version of the Metaverse to exist, I imagine five pillars that are drastically different from today.&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Unlimited storage and the ethics of digital legacy and self-destructing data&lt;&#x2F;li&gt;
&lt;li&gt;Time efficient creation and scanning processes&lt;&#x2F;li&gt;
&lt;li&gt;AI needs to be ubiquitous to clean up, remix and use data anywhere.&lt;&#x2F;li&gt;
&lt;li&gt;Changed understanding of Intellectual Property&lt;&#x2F;li&gt;
&lt;li&gt;Shared computational resources&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;unlimited-storage-and&quot;&gt;Unlimited storage and the ethics of digital legacy&lt;&#x2F;h3&gt;
&lt;p&gt;We need a way to store unlimited amounts of data and (the “and” is important here) know how and when to destroy them. We need to evolve our ethics around data and legacy as a digital society. Currently data ownership and privacy are a huge issue, as they are handled by huge corporations acting effectively as governments in their own right. An alternative solution would be to decentralize data and give owners very granular control over access to its encryption (very tedious), or to give up ownership entirely and instead opt for a vast data lake for all human data. Here the value of the data comes not from the data itself, but from its connections to other data. And a connection can be either public (linking to other data of similar caliber) or private (linking to a specific person). Data, in essence, needs to be able to self-destruct once it is no longer relevant (e.g. when no connection can be made to another datapoint anymore, or when it simply expires).&lt;&#x2F;p&gt;
&lt;h3 id=&quot;time-efficient-creation-and-scanning-processes&quot;&gt;Time efficient creation and scanning processes&lt;&#x2F;h3&gt;
&lt;p&gt;The speed at which data (especially 3D data) is created, particularly using AI, photogrammetry and similar non-traditional techniques, is increasing. However, we are still missing a consumer device that makes the scanning process a commodity. As the smartphone did for taking a photo, it should be as easy as point and tap. In addition, we need tools&#x2F;ways to enhance this data later (say, if someone scans the same object in better conditions, or the data can be augmented because a new connection with other data was made). To make sure it can be augmented by someone else, we need an agreed-upon, open data format for this, which we are lacking right now. Only then will we be able to effectively create a digital twin of our physical world.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;ai-needs-to-be-ubiquitous&quot;&gt;AI needs to be ubiquitous&lt;&#x2F;h3&gt;
&lt;p&gt;As has become apparent in the past few years, AI will define a new set of tools we can use to create, access and manipulate data. Current AI solutions, however, rely on closed-source implementations run by specific companies working on data sets that are also not open. It is my strong belief that by keeping these solutions closed, we are denied access to the single most important turning point in computing history. Only by opening up the AIs can we begin to build trust as a society and yield the innovation we as a society are capable of.&lt;&#x2F;p&gt;
&lt;p&gt;That being said, at some point in the near future we will see AI enabled tools for every digital process you can imagine. You will be able to mix, enhance and transform every possible data set using AI.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;changed-understanding-of-interlectual-property&quot;&gt;Changed understanding of Intellectual Property&lt;&#x2F;h3&gt;
&lt;p&gt;We can already see a slight drift in how IP is managed today. Using IP in other, third-party worlds is already happening. This marks only the beginning though: Today you can ask an AI prompt what a Super Mario mug would look like, or how Jurassic Park would be built as a Minecraft world. And we will see more and more of that. In the past, the way IP was used in third-party media was limited to the creative understanding of the people involved in the making of said media and the original IP. We will, however, see a shift where any third-party IP can be used in any media for its own good. And the owner of the property will be able to benefit from it nonetheless. So it is not IP that is going away (or making money with it) – it is rather the way it is limited and controlled that will change.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;shared-computational-resources&quot;&gt;Shared computational resources&lt;&#x2F;h3&gt;
&lt;p&gt;Today, access to computational power is still paid for, and you can frequently read news about who is building the next supercomputer. However, we will only truly unlock the next big digital revolution when we start sharing computational resources. We have to understand the resources we have on this planet as shared in general to be able to maneuver the climate crisis. The same is true for computational power: if we want to unlock the next level of our digital age, we have to give up ownership of computational resources and build the tools to access all computational power everywhere at the press of a button. This would reduce complexity for edge devices, increase the efficiency of the network and eliminate the tendency to throw away hardware sooner rather than later (I mentioned the climate already).&lt;&#x2F;p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;I can say confidently that we are not yet at what I would consider the metaverse; it certainly does not require AR&#x2F;VR devices, and it is not run by the company calling itself “meta”. I know that we have a lot ahead of us to make the next leap, but I am also confident we will find the way, evolve as a society and generate a lot of benefits along the way for everyone involved. I especially think that we first need to build a world-wide digital society without borders or walls (however great or fiery they are).&lt;&#x2F;p&gt;
&lt;p&gt;I also want to make it clear that any advance in the fields mentioned is already hailed as the arrival of the metaverse. But think about it: if you only advance in one of them, you are greatly limiting the possibilities and increasing the risk of digital aversion and failure at the same time.&lt;&#x2F;p&gt;
&lt;p&gt;Say you advance AI but do not advance the understanding of data ownership and privacy. Or you commoditize scanning tools without making data storage and computation practically free. Each individual innovation will be a solid step in its own right, yes, but the risk of breaking the digital economy is high as well, as monopolies or user aversion might be the result.&lt;&#x2F;p&gt;
&lt;p&gt;The metaverse as described here will not come as a sudden revolution, and it will not be driven by a single company. It will be a gradual process, a transformation of what we call the www today, but it will nonetheless be vastly different from what we know now.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>How to predict the Future</title>
        <published>2022-02-14T00:00:00+00:00</published>
        <updated>2022-02-14T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/how-to-predict-the-future/"/>
        <id>https://kaiwegner.online/blog/how-to-predict-the-future/</id>
        
        <content type="html" xml:base="https://kaiwegner.online/blog/how-to-predict-the-future/">&lt;blockquote&gt;
&lt;p&gt;It’s not hard to predict the future. The hard part is to predict it reliably, precisely and to describe something, that is relevant today.&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;When predicting the future some go to the extreme and talk about &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;hackaday.com&#x2F;2022&#x2F;02&#x2F;14&#x2F;predicting-the-future-hows-that-working-out&#x2F;&quot;&gt;“How we live in 100 years”&lt;&#x2F;a&gt;. That is especially hard, works out only in rare cases, and although it leaves you in awe, you have to ask: “So what?”. So here is how I go about doing it and how I try to make sure to generate relevant, precise and reliable predictions.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Reliability&lt;&#x2F;strong&gt; in this regard relates to the question of whether the prediction actually comes true. Of course this is the hard part, and it mainly relies on the continuity of what is going on. It implies that things are basically progressing further along the path they have been on for a while, and therein lies the main limitation. It is hard to anticipate inventions that are simply so revolutionary, they can be considered completely new. We also have to factor in socioeconomic factors (or market dynamics). Technological progress is meaningless if it is not embraced by society. Political and societal effects can stop technological advances at any time, for better and for worse. Predicting these effects is very hard, as marketing, storytelling and branding play a vital part in how society reacts to a new product.&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;Of Course he is going to talk autonomous vehicles…&quot; - Yes, I am.&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;A good example would be self-driving cars. Think about it for a second. Can you reliably predict that there is a future of autonomous driving ahead? If anything, today (2022) I can think of a thousand reasons why it will not be widely adopted (meaning on the majority of roads and drives). But what we can say reliably: all technological challenges along the way to building autonomous vehicles will be solved for sure. There is nothing in the realm of tech holding us back from perfecting it. What are the socioeconomic effects? What inventions would invalidate (e.g. autonomous micro-trains) or accelerate (e.g. separate roads) the development?&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Precision&lt;&#x2F;strong&gt; requires us to understand the technicalities of the system (product, market or environment) that we want to predict in sufficient detail. I deconstruct the system into all its parts (building blocks) in workshops and interviews to build a coherent understanding of it. Then I want to understand the elements that act and the entities that are acted upon, to build a broader understanding of the value each component brings. This lays the groundwork for a more precise understanding of where innovation can happen, where pains are felt today and where progress is being made or has not been seen for a while. In short, we have to understand the building blocks and the processes operating through the system to form a precise understanding of its value-creation process.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Relevance&lt;&#x2F;strong&gt; is what this is all about. If we are predicting the future of a grain of sand in the Baltic sea – frankly – who cares? To answer that question (who cares?) we need to form an understanding of the stakeholders (e.g. users, investors, creators) and understand the individual touch points each stakeholder has with the system. We can overlay the touch points and the stakeholders interacting with them to understand which touch points contribute most to the value creation within the system, and voilà: these are the ones that are most relevant.&lt;&#x2F;p&gt;
&lt;p&gt;What we built so far is a comprehensive understanding of the current state. So what? a) You can definitely do this with a fictional product as well (one you imagine to have in the future) b) We are not done with this topic just yet.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;how-to-predict-the-future&#x2F;Human-Centric-Design-Predicting-the-Future-1024x575.jpg&quot; alt=&quot;future = precision, relevance, reliability&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;extrapolate-into-the-future&quot;&gt;Extrapolate into the future&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;1&quot;&gt;1&lt;&#x2F;h3&gt;
&lt;p&gt;To build an understanding of the future, a good starting point is … today. Today, however, is never all it could be. Today’s systems are held back by legacy systems and by financial and time restrictions. Restrictions that are not inherent to the system in general, but to this particular instance. Restrictions that are unique to the implementation within a given environment and would be different if implemented from scratch or in another environment. To start us off, we want to eliminate these restrictions and go with an ideal version of today, as if we were to implement the system anew today.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;2&quot;&gt;2&lt;&#x2F;h3&gt;
&lt;p&gt;Looking at the building blocks and processes of this best-case scenario, we can then identify limitations (e.g. bandwidths, latencies, durations, access) within the system. Now eliminate those limitations. Imagine bandwidth or access is not a problem anymore for any individual building block in the system. Ask yourself: How would that affect the system?&lt;&#x2F;p&gt;
&lt;h3 id=&quot;3&quot;&gt;3&lt;&#x2F;h3&gt;
&lt;p&gt;Have you seen another system or industry, where this limitation was eliminated already? This is where trend reports and general insights from other industries are very helpful. You can now generate ideas of new possibilities unfolding based on a future, where one or more limitations are eliminated.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;4&quot;&gt;4&lt;&#x2F;h3&gt;
&lt;p&gt;And finally, wrap this up in one or more relatable stories told from the perspective of the stakeholders in the system.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;how-to-predict-the-future&#x2F;Human-Centric-Design-4-Steps-to-extrapolate-the-future1-1024x576.jpg&quot; alt=&quot;four steps in a row&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;I have now used this approach multiple times and it really works quite nicely for me. If you have to predict the future for your product or strategy and use this or a similar approach, let me know!&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Distraction vs. Inspiration</title>
        <published>2022-01-17T00:00:00+00:00</published>
        <updated>2022-01-17T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/distraction-vs-inspiration/"/>
        <id>https://kaiwegner.online/blog/distraction-vs-inspiration/</id>
        
        <content type="html" xml:base="https://kaiwegner.online/blog/distraction-vs-inspiration/">&lt;p&gt;Have you checked your notifications on your smart watch? Was that your phone vibrating in your pocket? We are constantly distracted by our digital companions.&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;Oh look! &lt;br&#x2F;&gt;
A three headed monkey!&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;distraction-vs-inspiration&#x2F;monkey-island.gif&quot; alt=&quot;monkey island&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;look-at-me&quot;&gt;Look at me!&lt;&#x2F;h2&gt;
&lt;p&gt;Being distracted by a system is a result of motivation design gone wrong. The designer of said system envisioned the user as being readily available for engagement. This seems completely normal to us today, as the behavior of switching attention to our digital devices on demand, has been learned for a long time.&lt;&#x2F;p&gt;
&lt;p&gt;For me it all started with the doorbell, which any passerby could use, no matter what state the homeowner was in (preferably sweatpants or worse). Today, we are constantly distracted by attention-grabbing mobile devices and wearables.&lt;&#x2F;p&gt;
&lt;p&gt;The result is that users are becoming numb. We turn the system off, or in the worst case we are distracted and ignore reality (e.g., while driving or in a social situation).&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-are-we-distracted&quot;&gt;Why are we distracted?&lt;&#x2F;h2&gt;
&lt;p&gt;The reasons for this come down to WHEN, WHAT and HOW:&lt;&#x2F;p&gt;
&lt;h3 id=&quot;when-infrequent-information-flow&quot;&gt;WHEN: Infrequent information flow&lt;&#x2F;h3&gt;
&lt;p&gt;We are getting information in a chaotic cadence. This leaves the individual unable to plan for the information influx (in contrast to watching a TV broadcast).&lt;&#x2F;p&gt;
&lt;p&gt;We are being disturbed at the worst moments 🏎 and are feeling left without any news, when it really matters 🧻.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;what-relevance-of-information&quot;&gt;WHAT: Relevance of Information&lt;&#x2F;h3&gt;
&lt;p&gt;How relevant the information is gets determined algorithmically by the sender, often in advance and in complete ignorance of any information about the receiver and their situation. This leads to information being irrelevant in the first place (e.g., ads) or irrelevant “at the moment” (e.g., a smarthome update on window shade positions during worship in church).&lt;&#x2F;p&gt;
&lt;h3 id=&quot;how-information-channel&quot;&gt;HOW: Information Channel&lt;&#x2F;h3&gt;
&lt;p&gt;Instead of requesting an update, users receive the information whether they like it or not. This pattern originated in MS Exchange e-mail notifications and is now applied to a wide variety of communication. Think about it: we expect the designers to know better than the user how information should be presented. Upon returning to an app? Upon unlocking the device? On demand? Push? We should give users options to change the information channel based on their preference.&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;h2 id=&quot;what-s-inspirational-interaction-like&quot;&gt;What’s inspirational interaction like?&lt;&#x2F;h2&gt;
&lt;p&gt;Ever heard the phrase “kissed by the muse”? It refers to inspiration, but the term “muse” is also used for people who act as such.&lt;&#x2F;p&gt;
&lt;p&gt;In human interaction, we experience inspiration. Quite often it is the opposite of human-machine interaction. We expect our conversation partner to surprise us with new information or new ways of presenting it.&lt;&#x2F;p&gt;
&lt;p&gt;It is what is generally considered “good conversation”. We call a slight irritation due to new information, or new ways of presenting it, “interesting”, “challenging” or “inspiring”, and we are drawn towards other humans who provide this kind of experience to us, even more so if it is a shared feeling.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;when-we-choose-when-to-engage&quot;&gt;WHEN We choose when to engage.&lt;&#x2F;h3&gt;
&lt;p&gt;Even if only one party has chosen to meet up with the other individual, it is socially acceptable (although not in all circumstances) to decline the interaction. Depending on the form of communication, we can choose when (sometimes even if) to engage.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;what-we-negotiate-complexity&quot;&gt;WHAT We negotiate complexity.&lt;&#x2F;h3&gt;
&lt;p&gt;During conversation between individuals, it is common practice to constantly negotiate if information is interesting (right now) or not. Sometimes it is generally not wanted, other times it is just a question of the right moment, style or level of complexity. I consider it a basic requirement of meaningful conversation to agree about the importance of the topic and thus on what the conversation is about.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;how-we-choose-our-form-of-communication&quot;&gt;HOW We choose our form of communication&lt;&#x2F;h3&gt;
&lt;p&gt;With other individuals it is ok to decline a certain form of communication but exercise another. And usually we choose a form that fits the relationship with that human. Some are pen pals, some call regularly, and with others we engage on every level, because we seek their company and a very intimate relationship.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;When designing experiences (e.g. apps or more interactive experiences) we should consider the WHEN, WHAT and HOW. We are expected to provide smart experiences on our smart devices. I cannot consider interaction that is not informed, negotiated or controlled by the user as smart. If you want to know more, you might want to continue reading on &lt;a href=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;human-aware-design&#x2F;&quot;&gt;human aware design&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Human Aware Design</title>
        <published>2022-01-11T00:00:00+00:00</published>
        <updated>2022-01-11T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/human-aware-design/"/>
        <id>https://kaiwegner.online/blog/human-aware-design/</id>
        
        <content type="html" xml:base="https://kaiwegner.online/blog/human-aware-design/">&lt;h2 id=&quot;motivation&quot;&gt;Motivation&lt;&#x2F;h2&gt;
&lt;p&gt;If you are a fan of Design Thinking, Product Thinking or product design in general, you are in good company here. In this post, I will share my approach to product development. I like to think it goes further than the “user centric design“ approach you probably already know.&lt;&#x2F;p&gt;
&lt;p&gt;“User centric” is often used synonymously with “human centric”. This is not correct. We distinguish user centric as being focused on the practical needs of a user, whereas human centric includes all human needs of the individual interacting with the software. We see humans being influenced by various factors that are often not considered in product design when we say “well, we start from the user needs”.&lt;&#x2F;p&gt;
&lt;p&gt;Humans are driven by emotions, held back by prejudices, establish mental models and views and build opinions. To date we are not taking these aspects of our users into account. We are not measuring them, not anticipating them nor are we reacting to or predicting them.&lt;&#x2F;p&gt;
&lt;p&gt;The good news: We are at a turning point. The existing understanding of computers, which was formed in the ’70s, namely a box in which we enter data and get a result, is dead. The new understanding is one of a readily available companion, an omniscient friend, who can support the user in all situations by connecting them to the rest of the world. A companion, who is able to sense what is good for the user and what is not. Who reacts to the immediate and underlying needs of the user, rather than a set of predetermined needs, that have been decided on in a design process earlier.&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;“What we see is not a direct reflection of the world,
but a mental representation of the world, that is
infused by our emotional experiences.” - Science Daily&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;h2 id=&quot;what-we-are-missing&quot;&gt;What we are missing&lt;&#x2F;h2&gt;
&lt;p&gt;The promise of user centric design is that if we follow these patterns correctly, we must end up with engaging experiences that are satisfying to use, fulfil a business purpose and drive change. Yet we often see the opposite. Solutions rarely gain the traction they aim for, rarely entice users and almost never evolve into something better from a poor start. Sure, you hear the success stories of how a designer turned a product around or how a new technology positively changed an experience. Compared to the mass of products, however, these are rare individual cases. What is it, then, that we are missing?&lt;&#x2F;p&gt;
&lt;p&gt;We do not understand human emotions, feelings and behavior, nor do we measure their changes. We base our decisions on user segments and aggregate behaviour. Individual behaviour is (almost) completely invisible to us, and it is generally (and for good reason) discouraged to base decisions on individual experiences.&lt;&#x2F;p&gt;
&lt;p&gt;Instead of sticking to a predefined pattern that is rarely updated (and if so, only by authority), a more individual approach to tooling (the “how”) is required as well.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-we-are-aiming-for&quot;&gt;What we are aiming for&lt;&#x2F;h2&gt;
&lt;p&gt;We need an approach that is adaptable to a broad range of projects and challenges. An approach with an easy-to-understand concept and a comprehensive, flexible, non-dogmatic toolset. The approach should incorporate human aspects from the ground up. It should also fit the best practices known today and incorporate current technological developments.&lt;&#x2F;p&gt;
&lt;p&gt;When designing products, we will consider the individual user and their context in every moment. We are not capable of knowing everything in advance – and the user is neither for that matter. We will therefore need to predict and adapt at runtime.&lt;&#x2F;p&gt;
&lt;p&gt;This approach to Human Aware Design will provide various tools that can be used individually. It will also provide tools to map highly complex systems&#x2F;environments, understand the dynamics within them and model rules to mitigate undesirable aspects of these dynamics.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;at-the-core-human-aware-means-contextual-meaningful-adaptive&quot;&gt;At the Core Human Aware means Contextual, Meaningful, Adaptive&lt;&#x2F;h2&gt;
&lt;p&gt;Think about the characteristics of human-to-human interaction. You will recognise that it is contextually aware, meaningful and adaptive in its complexity.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;I Contextual awareness&lt;&#x2F;strong&gt; is when your partner knows when to stop talking because of something more important, or when to repeat something because outside noise was preventing you from understanding.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;II Meaningful Interaction&lt;&#x2F;strong&gt; is when your communication or interaction is not just purposeful, but the purpose is fully understood by all parties involved, and they understand how the current interaction contributes to the fulfilment of this purpose. Being asked for your gender during a hiring process is not meaningful, as it is generally unclear what the purpose of this question is and how it contributes to the overall success of the hiring process.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;III Adaptive Complexity&lt;&#x2F;strong&gt; is generally applied by humans when they rephrase something to avoid jargon. Another example would be explaining something in more or less detail based on the existing understanding of a listener. Good scholars and narrators do this intuitively, knowing when the audience gets interested or bored.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;contextual-awareness-is-data-driven&quot;&gt;Contextual Awareness is data driven&lt;&#x2F;h3&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;human-aware-design&#x2F;Human-Centric-Design-Contextual-Awareness-1024x575.jpg&quot; alt=&quot;Contextual Awareness&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Based on data, we can construct an artificial understanding of the environment. What is going on right now? Are we in a hurry? Is what the user is doing currently critical for their success? Or simply: are we on the move? Are we sleeping? There are few products out there that act on information like this. The Apple Watch (e.g. the alarm function and automatic training detection) and iOS Focus Modes are good examples of how to do this fairly well with current technology: they automatically determine how to act based on their environment.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;evolving-the-when-what-and-the-how-for-more-meaningful-interaction&quot;&gt;Evolving the WHEN, WHAT and the HOW for more Meaningful Interaction&lt;&#x2F;h3&gt;
&lt;p&gt;&lt;strong&gt;WHEN&lt;&#x2F;strong&gt; interaction takes place need not always be plannable, but it must be controllable by the user. Handing over control to the user results in empowerment and deeper engagement, and when interaction does take place, the user is more likely to be readily available. If the control is granular, users will also see very clearly what kind of interaction the system aspires to and will gain a deeper understanding of its purpose and means of action.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;WHAT&lt;&#x2F;strong&gt; In order to make sure interaction is meaningful, we need transparency about its purpose, so the user can form an understanding of the value and the risk involved. With other humans this is negotiated: we discuss whether something is important or not. It is the WHAT that needs to be clarified. What are we interacting upon? Is this necessary? How does this help me (the user)?&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;HOW&lt;&#x2F;strong&gt; Instead of using a predetermined form of communication (or channel), we should aim to be capable of communicating through as many channels as possible, and then let the user choose which channel to engage with (or be brave enough to choose one to start with and let the user change it later). They may even decide to switch channels mid-conversation, which we should account for.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;human-aware-design&#x2F;Human-Centric-Design-Meaningful-Interaction-1024x576.jpg&quot; alt=&quot;Meaningful Interaction&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;When thinking about interaction, it quickly comes down to distraction vs. inspiration. The personal preferences of the receiver of the communication have to play a more important role in this process. Where currently we are mostly ignorant of these preferences, we need to explore ways to learn about them, catalogue them, and adapt in real time as they change.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;empathy-and-needs&quot;&gt;Empathy and Needs&lt;&#x2F;h3&gt;
&lt;p&gt;In the Design Thinking process, the first stage involves developing a sense of empathy towards the people you are designing for: gaining insights into what they need, what they want, how they behave, feel, and think, and why they demonstrate such behaviours, feelings, and thoughts when interacting with products in a real-world setting.&lt;&#x2F;p&gt;
&lt;p&gt;When empathising, we are often focussed on immediate user needs. We do not account for long-term, permanent effects and underlying needs, nor for contextual&#x2F;environmental needs. Nor do we take into account (especially on subsequent iterations) the needs that the product itself puts into the mix.&lt;&#x2F;p&gt;
&lt;p&gt;We need to ask the right questions. On empathy and desire we are asking:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;How much time do we spend on empathising? When are we planning to do it?&lt;&#x2F;li&gt;
&lt;li&gt;What does desirability mean in the context of the user?&lt;&#x2F;li&gt;
&lt;li&gt;Why do the users have the needs we identified? What is driving those needs?&lt;&#x2F;li&gt;
&lt;li&gt;Which other needs correlate with these needs?&lt;&#x2F;li&gt;
&lt;li&gt;Which need(s) will this solution address?&lt;&#x2F;li&gt;
&lt;li&gt;How will it fit into people’s lives?&lt;&#x2F;li&gt;
&lt;li&gt;How will it appeal to them?&lt;&#x2F;li&gt;
&lt;li&gt;Will people actually want it in their lives long-term?&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;On contextual and product needs we ask:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;What does the user try to accomplish in this context?&lt;&#x2F;li&gt;
&lt;li&gt;Which other stakeholders are relevant in this context?&lt;&#x2F;li&gt;
&lt;li&gt;What are the needs of these stakeholders?&lt;&#x2F;li&gt;
&lt;li&gt;What expectations do we have towards the context in which the user is interacting with the product?&lt;&#x2F;li&gt;
&lt;li&gt;What if these expectations are false?&lt;&#x2F;li&gt;
&lt;li&gt;How does the context bind the attention of the user?&lt;&#x2F;li&gt;
&lt;li&gt;What does the product need the user to do?&lt;&#x2F;li&gt;
&lt;li&gt;When does the product need input by the user?&lt;&#x2F;li&gt;
&lt;li&gt;What does successful interaction with the product result in for the user?&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;human-aware-design&#x2F;Human-Centric-Design-Needs-Map-2-1024x575.jpg&quot; alt=&quot;Needs Map&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;In reality, needs compete, as they are directly bound to our attention. Understand that the needs your product has (e.g. immediate: I need the user to be notified of this transaction; underlying: I want to convey to the user that there are options to influence the transaction) are positioned in that field.&lt;&#x2F;p&gt;
&lt;p&gt;Only once you understand the underlying and immediate needs of your product, namely the things it needs the user to interact with to convey its purpose, can you think about how to communicate them effectively.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;aim-at-adaptive-complexity&quot;&gt;Aim at Adaptive Complexity&lt;&#x2F;h3&gt;
&lt;p&gt;Adaptive Complexity is the ability to account for varying levels of engagement with whatever interaction you are in the process of having, and to ensure a positive outcome in any case. More concretely: if the user is not willing or able to give the information you need to proceed, why not guess it? If they already know how the product is to be used, why show them the tutorial?&lt;&#x2F;p&gt;
&lt;p&gt;Adaptability, then, is not a binary decision. It is a real effort and cost factor in creating good products. Luckily, there are steps towards adaptability that are already helping us design better products.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;human-aware-design&#x2F;Human-Centric-Design-Adaptive-Complexity-1024x576.jpg&quot; alt=&quot;Adaptive Complexity&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;As a next step, we need to evolve our ways of building products to be even more human-centric, or human aware, as I’d like to call it. We need to reshape human-machine interaction to become more inspiring and less distracting. It needs to become more like human-to-human interaction.&lt;&#x2F;p&gt;
&lt;p&gt;I laid out some of the tools and thinking behind my approach in this post, and I hope you find it useful. If you have any thoughts on how to evolve the method, or if you have used it or parts of its thinking, knowingly or unknowingly, please get in touch and let’s talk!&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>On Digital Responsibility</title>
        <published>2022-01-11T00:00:00+00:00</published>
        <updated>2022-01-11T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/on-digital-responsibility/"/>
        <id>https://kaiwegner.online/blog/on-digital-responsibility/</id>
        
        <content type="html" xml:base="https://kaiwegner.online/blog/on-digital-responsibility/">&lt;h2 id=&quot;better-performance-is-not-saving-us&quot;&gt;Better Performance is not saving us&lt;&#x2F;h2&gt;
&lt;p&gt;There is a myth amongst software developers: performance optimisation is generally seen as an ideal. But various factors prevent better-performing solutions from actually reducing the footprint of digital technology.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;i-the-value-of-compute-power-is-decreasing-the-more-of-it-becomes-available&quot;&gt;I The value of compute power is decreasing the more of it becomes available.&lt;&#x2F;h3&gt;
&lt;p&gt;The value of performance optimisation depends on the amount of compute power available at any given moment. And yes, compute power is still on the rise. As a result, optimising performance has more value when you start thinking about it than when you finally roll it out.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;ii-the-fragmentation-of-compute-users-is-increasing-as-we-are-digitising-our-world&quot;&gt;II The fragmentation of compute users is increasing as we are digitising our world.&lt;&#x2F;h3&gt;
&lt;p&gt;Every day, more programs are deployed. The role your program plays in all this shrinks constantly as digitisation moves along. As more and more is digitised, the number of running programs increases, and thus the share of compute power you are using decreases, subsequently reducing the value of any performance optimisation.&lt;&#x2F;p&gt;
&lt;p&gt;So no, performance optimisation will not save us.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;responsibility-beyond-sustainability&quot;&gt;Responsibility beyond Sustainability&lt;&#x2F;h2&gt;
&lt;p&gt;Instead of thinking only about bare metal and power consumption, we have to think beyond the pure compute resource and consider other aspects as resources as well. We will have to change how and when we interact with software, employ different software architectures, and ensure our users stay sane.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;i-discourage-wasteful-behaviour&quot;&gt;I Discourage wasteful behaviour.&lt;&#x2F;h3&gt;
&lt;p&gt;With our users (including developers, as they use software to generate software), we have to make wasteful behaviour transparent. Who knows how much energy or compute power is used by performing a single Google search? Who knows if MS Word is more efficient than OnlyOffice when checking for spelling errors? We should make this visible to enable a discussion about these aspects. Imagine having an energy meter running constantly, showing you how much you are consuming with every action. Secondly, we have to give users the opportunity to adjust their behaviour accordingly. How easy is it to disable automatic spell checking in your office suite? Can I stop Google from searching before I hit the return key?&lt;&#x2F;p&gt;
&lt;h3 id=&quot;ii-build-a-healthy-environment&quot;&gt;II Build a healthy Environment.&lt;&#x2F;h3&gt;
&lt;p&gt;When we think about nature, it is quite obvious what healthy means: no human intervention in the course of what nature is doing. In particular, we consider humans spreading any kind of material into nature as unhealthy. What would we consider a healthy digital environment? This feels like asking about the difference between physical and mental health: physical health being the natural environment, and mental health being a concept where we have no clear understanding of what healthy or unhealthy actually means.&lt;&#x2F;p&gt;
&lt;p&gt;Yet there are some aspects, where I hope we can all agree they are healthy or unhealthy.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Do not pollute the digital environment of your users with irrelevant information.&lt;&#x2F;strong&gt; Consider the &lt;a href=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;human-aware-design&#x2F;&quot;&gt;human aware design&lt;&#x2F;a&gt; approach when thinking about what is relevant (e.g. meaningful interaction).&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Build systems that foster long-term relationships.&lt;&#x2F;strong&gt; If you manage to build a system that does not depend on short-term success (on any side), you are doing things right! Then you are considering the mental health of your users and of your team. You are correctly considering the financial sustainability of what you are doing. You are investing resources in something worthwhile, as it is here to stay (for a long period of time). Avoid catering to short-term goals such as rapid growth (which will eventually be eaten up by churn) or high levels of engagement (which are not sustainable for the user), to name some examples.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Weigh ease of use against joy of use.&lt;&#x2F;strong&gt; When we develop games, we have to make players learn new mechanics (e.g. rules and interactions within the game). Learning is tiresome, so we consider our users to be in flow if they are in a healthy balance between learning new things (or being inspired, for that matter) and relaxing (e.g. recreational browsing or consuming). This not only helps users learn more easily, it also improves mental health, as it reduces stress.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;iii-build-trust&quot;&gt;III Build trust&lt;&#x2F;h3&gt;
&lt;p&gt;Privacy is a vital part of digital responsibility. I think, however, there is no merit in trying to prevent data from being gathered. Instead, we need a sustainable method of building trust between users and products&#x2F;services. There are many third parties around the world involved in doing that (e.g. Trustpilot or TÜV). What I am talking about, however, is that today our products and services are not building trust; they either take it for granted or explain it. But that is not building trust.&lt;&#x2F;p&gt;
&lt;p&gt;Say you walk into a bank. The clerk will not tell you how your money is secured, how thick the walls are, or how the bank is part of a deposit insurance scheme. The clerk takes it for granted that you trust them. On the other hand, a lot of products display trust certificates or explain how the company behind them is trustworthy as one of their three core user values.&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;Trust has to be earned not proclaimed.&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;A product therefore should not just explain why it is trustworthy; it should be able to scale the risk on the user side accordingly. Think about that bank again. If you are unsure about its trustworthiness, you have the option to reduce the amount of money you put into it, scaling down your risk of loss dramatically. If you are offering a service in your product, offer your potential new clients a low-risk option to get started, and earn their trust by reminding them at a later time how your trust relationship is doing.&lt;&#x2F;p&gt;
&lt;p&gt;Another tip would be to actually offer the option to trade money for privacy. If you are using Google Analytics (and worse) to analyse the crap out of everything going on, offer your users the option to disable all of that for money. Now it is their choice. If they want privacy, they can have it with all of its benefits, but they’ll have to pay for the (potential) loss of insight on your end. Making the needs of you and your product transparent to the user is one of the principles of &lt;a href=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;human-aware-design&#x2F;&quot;&gt;human aware design&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;In my opinion, Digital Responsibility should encompass the health of our users and the way we build relationships with them. Although environmental factors play a vital part in responsibility, the products we build also shape the way (especially younger) people form their understanding of relationships, trust, and the world in general. That is a huge responsibility for us as product developers to bear, but also a great challenge to work on!&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>On the Future (2022)</title>
        <published>2022-01-06T00:00:00+00:00</published>
        <updated>2022-01-06T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/on-the-future-2022/"/>
        <id>https://kaiwegner.online/blog/on-the-future-2022/</id>
        
<content type="html" xml:base="https://kaiwegner.online/blog/on-the-future-2022/">&lt;p&gt;I describe myself as a Futurist. To me, that is a catalyst: a person who encourages and enables change within a project or company in order to stay (or become) competitive in the drastically changing digital market.&lt;&#x2F;p&gt;
&lt;p&gt;There is this thing called digitisation, that is rooted in the development of the internet and is changing all sorts of industries drastically. Value chains are shortened, access to goods and processes is democratised and we are subjected to more and more cat memes.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;on-the-future-2022&#x2F;e7e.jpg&quot; alt=&quot;Welcome to the Internet. I&amp;#39;ll be your guide&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;But there is something more profound simmering beneath the surface that only a few of us are experiencing right now. In the near future, the way we interact with digital interfaces, the way these interfaces are created, and the way we generate value in the digital world will change so fundamentally that we will have to rethink what an operating system has to do, what a program is, and why we (still) need humans for any of this.&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;h2 id=&quot;your-users-are-actually-consumers&quot;&gt;Your users are actually consumers&lt;&#x2F;h2&gt;
&lt;p&gt;I am not (just) talking about artificial intelligence, neural networks, and the like. Let me explain.&lt;&#x2F;p&gt;
&lt;p&gt;As of writing this, it is the year 2022. We generally interface with our digital devices in a very limited way. To start things off, only a few of those handling these devices know how to explicitly make them do what they intend. Even fewer would consider themselves programmers. Most of them are what we call users, when in actuality they are consumers. We mostly consume media and services, and there is nothing wrong with that.&lt;&#x2F;p&gt;
&lt;p&gt;What’s the problem then? Often, products and services confuse their understanding of what kind of person they are serving. A consumer wants easy access, a low price, and high, straightforward value. A user, on the other hand, actively engages, seeks a specific utility, and is willing to pay a premium if the utility they are seeking is met accurately or exceptionally. The low-code and no-code efforts that are currently trending are a great example of how to better serve “real” users. It is quite obvious that users and consumers require different treatment. So know if your user is actually a consumer (or vice versa) and adapt your offering accordingly.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;on-the-future-2022&#x2F;Image1.jpg&quot; alt=&quot;The world if users were not treated as consumers&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Don’ts&lt;&#x2F;strong&gt; Do not push jargon on consumers. Do not expect your consumers to learn what you consider revolutionary concepts.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Dos&lt;&#x2F;strong&gt; Help consumers get to the value of your service as easily and quickly as possible. Reduce the consumer’s obligation to give anything that is not money.&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;h2 id=&quot;we-are-stuck-with-certain-usage-patterns&quot;&gt;We are stuck with certain usage patterns&lt;&#x2F;h2&gt;
&lt;p&gt;The other limitation we are victims of is the stagnation of what is generally called user experience. Consider Microsoft Excel in Office 365 in the year 2022 and compare it with VisiCalc, developed for the Apple II in 1979. The way we interact with a spreadsheet did not change (drastically) in the past 40 years. You think that is an isolated case? Think again: Word, Photoshop, 3ds Max, Blender, Finder or Explorer, programming environments, terminals. I could go on.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;on-the-future-2022&#x2F;Visicalc.png&quot; alt=&quot;Visicalc&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;div class=&quot;caption&quot;&gt;
 Source https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Spreadsheet#&#x2F;media&#x2F;File:Visicalc.png &lt;&#x2F;div&gt;
&lt;p&gt;The scary part is this: it is not because they evolved and prevailed as the best on offer! I know, shocking. It is because of marketing, habits, and questionable business decisions that the way we interact with our digital devices has not evolved.&lt;&#x2F;p&gt;
&lt;p&gt;You would think that a brand-new ecosystem with different input and output capabilities, created on a green field, could change that. Look at the way smartphones work and tell me where we have evolved. It has become harder to program your device, harder to exchange data between applications, harder to install applications other than those allowed by a central entity. And the usage patterns?&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;on-the-future-2022&#x2F;2022-01-10-13-35-30-0-473x1024.png&quot; alt=&quot;Excel on mobile&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Let me make this clearer.&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;We are not generally able to program our devices to do what we want them to &lt;em&gt;(even though the devices are more than capable of it)&lt;&#x2F;em&gt;&lt;&#x2F;li&gt;
&lt;li&gt;We are not generally able to collaborate on content
&lt;em&gt;(even though we are constantly connected with each other)&lt;&#x2F;em&gt;&lt;&#x2F;li&gt;
&lt;li&gt;We are not generally able to easily access all the information that is shared with us
&lt;em&gt;(even though we are constantly online)&lt;&#x2F;em&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;This will change in the near future, and here is what it takes:&lt;&#x2F;p&gt;
&lt;h2 id=&quot;a-means-to-dynamically-apply-procedures-and-algorithms&quot;&gt;A means to dynamically apply procedures and algorithms&lt;&#x2F;h2&gt;
&lt;p&gt;Solving the above however does not address the fundamental change, that is bound to happen. It would merely fulfil the promise of the current technology. What we need in addition is a means of dynamically incorporating procedures and algorithms as we see fit for purpose in every instant. Let me elaborate.&lt;&#x2F;p&gt;
&lt;p&gt;Provided it is properly trained, AI is generally good at solving very specific tasks. That is the reason why we are not seeing more AI used in software. In general, we have the same problem with any complex algorithm (complex meaning it depends on a lot of input variables to create valuable output): the more complex it gets, the less generic the use cases it can be applied to.&lt;&#x2F;p&gt;
&lt;p&gt;Recently, the focus in building and training AIs has been on applying their understanding to a broader field of applications. That is a good thing and should be pursued, but it does not solve the problem that an algorithm (including a trained AI) is only useful for a very specific subset of situations, and therefore its versatility is limited as well.&lt;&#x2F;p&gt;
&lt;p&gt;What is needed is a means to make the application (not the training!) of such algorithms as accessible as possible: to give users and programmers an easy-to-access way of exchanging, tweaking, and remixing algorithms, of commenting on their usefulness, and of bundling (and republishing) them into problem-solving recipes.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;an-accessible-market-place-for-interoperable-algorithms&quot;&gt;An accessible market place for interoperable algorithms&lt;&#x2F;h2&gt;
&lt;p&gt;I am talking about a marketplace where it is easy to access all sorts of trades and ask them to apply their tools to your workpiece, only to then move on to the next craftsman to advance to the next stage.&lt;&#x2F;p&gt;
&lt;p&gt;This requires two things:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Interoperability&lt;&#x2F;strong&gt; The general interfacing of procedures and algorithms is (as of today) not solved. Data structures that are useful within one application are often not performant in, or outright incompatible with, others. To solve this, a semantic understanding of data is needed, whereby the data structure can be reconfigured based on the needs of the algorithm currently working on it, only to be reconfigured again for the next one. Constant reconfiguration, a semantic understanding of data and, last but not least, non-destructive interaction are the enablers of interoperability between algorithms.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Accessibility&lt;&#x2F;strong&gt; Making procedures and algorithms accessible is not trivial either. Questions we need to ask ourselves: How do I find those algorithms? How are their use cases described? Are they generally destructive to the data? How do I evaluate or curate the outcome? Can I not just apply, but also train my own algorithm based on my use case?&lt;&#x2F;p&gt;
&lt;p&gt;These are a lot of questions, and I will pursue answers to them in the future. They do, however, challenge our understanding of what a program is, what a task is, what the operating system has to do well, and what role the user and the programmer play in all of this.&lt;&#x2F;p&gt;
&lt;p&gt;What this eventually means is that a lot of tedious tasks we have to do manually today can effectively be done by our digital devices. Especially reconfiguring data to make it accessible for a given set of operations (e.g. transforming spreadsheets, restyling documents, rewriting code in another programming language, or manipulating 3D data) will be automated.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>USB Boot with PowerPC Mac</title>
        <published>2020-02-19T00:00:00+00:00</published>
        <updated>2020-02-19T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/usb-boot-with-powerpc-mac/"/>
        <id>https://kaiwegner.online/blog/usb-boot-with-powerpc-mac/</id>
        
        <content type="html" xml:base="https://kaiwegner.online/blog/usb-boot-with-powerpc-mac/">&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;usb-boot-with-powerpc-mac&#x2F;IMG_20200209_221123_ergebnis.jpg&quot; alt=&quot;usb boot old powerpc mac&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;When you have an old PowerPC Mac but no OS DVD, you have a problem: you cannot boot from USB, as these old PowerPC Macs only support booting from internal disks, CDs, or via FireWire.&lt;&#x2F;p&gt;
&lt;p&gt;There is, however, a way to convince your old Apple’s Open Firmware that your USB drive is actually a CD drive, and here is how to do it.&lt;&#x2F;p&gt;
&lt;p&gt;To enter Open Firmware, hold down Command-Option-O-F while the Mac powers on.&lt;&#x2F;p&gt;
&lt;p&gt;In order to make this work, you first need to find out which USB device your thumb drive maps to.&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;bash&quot;&gt;0 &amp;gt; devalias
[...]
usb0    &#x2F;pci@f2000000&#x2F;@15
usb1    &#x2F;pci@f2000000&#x2F;@15,1
usb2    &#x2F;pci@f2000000&#x2F;@15,2
hd      &#x2F;pci@f4000000&#x2F;ata-6@d&#x2F;disk@0
[...]
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Now we know how to address our USB drive. Let’s map it to the alias “cd” and then load the bootloader.&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;bash&quot;&gt;0 &amp;gt; devalias cd &#x2F;pci@f2000000&#x2F;usb@15&#x2F;disk@1
0 &amp;gt; boot cd:,\System\Library\CoreServices\BootX
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;To test whether your alias works before booting, you can run this command and let Open Firmware show you a directory listing:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;bash&quot;&gt;0 &amp;gt; dir cd:,\ \
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
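&lt;p&gt;If you boot from this drive regularly, Open Firmware can also store the alias in NVRAM with nvalias, so it survives reboots. A sketch, assuming the same device path as in the example above (yours may differ); nvunalias removes the stored alias again:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;bash&quot;&gt;0 &amp;gt; nvalias cd &#x2F;pci@f2000000&#x2F;usb@15&#x2F;disk@1
0 &amp;gt; nvunalias cd
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;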
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Slideshow on TV with Raspberry PI</title>
        <published>2019-11-19T00:00:00+00:00</published>
        <updated>2019-11-19T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/slideshow-on-tv-with-raspberry-pi/"/>
        <id>https://kaiwegner.online/blog/slideshow-on-tv-with-raspberry-pi/</id>
        
        <content type="html" xml:base="https://kaiwegner.online/blog/slideshow-on-tv-with-raspberry-pi/">&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;slideshow-on-tv-with-raspberry-pi&#x2F;fire.jpg&quot; alt=&quot;FIRE on a Raspberry&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;sad-backstory&quot;&gt;Sad backstory&lt;&#x2F;h2&gt;
&lt;p&gt;For years, I used AppleTVs, FireTVs, and even Windows PCs (originally meant for gaming) to display recent and ancient family photos.&lt;&#x2F;p&gt;
&lt;p&gt;Recently, however, the FireTV we were using started acting weird. Firstly, it did not iterate over all photos (more than 11,000 of them); after its last firmware update, only the most recent files (100 or 200 of them) were shown. Secondly, the quality of the photos decreased dramatically: there were JPEG compression artifacts that I obviously did not put there.&lt;&#x2F;p&gt;
&lt;p&gt;In order to solve this issue, I used a Raspberry Pi …&lt;&#x2F;p&gt;
&lt;h2 id=&quot;happy-ending-how-to&quot;&gt;Happy ending (how-to)&lt;&#x2F;h2&gt;
&lt;p&gt;First things first: where are the photos? All my photos are stored on my Nextcloud server. I created a Samba share on the server to access the photos from the Raspberry Pi, and set up the following entry in &#x2F;etc&#x2F;fstab to mount it:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;bash&quot;&gt;&#x2F;&#x2F;192.168.0.159&#x2F;3tb   &#x2F;mnt   cifs  user=kaiwegner,password=**,nounix,iocharset=utf8    0 0
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
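&lt;p&gt;A side note: putting the password directly into &#x2F;etc&#x2F;fstab exposes it to every local user. As a sketch (the file name and location here are my choice, not mandated), mount.cifs also accepts a credentials file, which you can restrict with chmod 600:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;bash&quot;&gt;# &#x2F;root&#x2F;.smbcredentials (chmod 600)
username=kaiwegner
password=**

# and in &#x2F;etc&#x2F;fstab use:
&#x2F;&#x2F;192.168.0.159&#x2F;3tb   &#x2F;mnt   cifs  credentials=&#x2F;root&#x2F;.smbcredentials,nounix,iocharset=utf8    0 0
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;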
&lt;p&gt;Be sure the CIFS tooling is installed so the share can be mounted (smbclient is also handy for browsing shares):&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;bash&quot;&gt;$ sudo apt-get install smbclient cifs-utils
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;In order for this share to be mounted automatically at boot, we need to change some configuration on the Raspberry Pi.&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;bash&quot;&gt;$ sudo raspi-config
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;slideshow-on-tv-with-raspberry-pi&#x2F;1574186476.png&quot; alt=&quot;Wait for Network at Boot&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;In “Boot Options” enable “Wait for Network at Boot”&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;slideshow-on-tv-with-raspberry-pi&#x2F;1574186567.png&quot; alt=&quot;Enable Autologin&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;In “Boot Options” -&amp;gt; “Desktop &#x2F;CLI” select “Desktop Autologin”&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;slideshow-on-tv-with-raspberry-pi&#x2F;1574186519.png&quot; alt=&quot;Enable GLDriver&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;In “Advanced” enable the “GL Driver” Option to enable hardware acceleration&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;Now we need two packages to make this work:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;bash&quot;&gt;$ sudo apt-get install xscreensaver xscreensaver-gl-extra
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;On the Raspbian desktop, open a terminal and run xscreensaver-demo, where we can select the screensaver “GLSlideshow”.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;slideshow-on-tv-with-raspberry-pi&#x2F;1574187008.png&quot; alt=&quot;XScreenSaver config 1&quot; &#x2F;&gt;
&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;slideshow-on-tv-with-raspberry-pi&#x2F;1574187018.png&quot; alt=&quot;XScreenSaver config 2&quot; &#x2F;&gt;
&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;slideshow-on-tv-with-raspberry-pi&#x2F;1574187027.png&quot; alt=&quot;XScreenSaver config 3&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Now the last piece of the puzzle is to start the screensaver automatically after boot. This is done through the autostart config file of LXDE:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;bash&quot;&gt;$ sudo nano &#x2F;etc&#x2F;xdg&#x2F;lxsession&#x2F;LXDE-pi&#x2F;autostart
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;slideshow-on-tv-with-raspberry-pi&#x2F;1574186676.png&quot; alt=&quot;LXDE-pi&#x2F;autostart&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Add the following two lines:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;bash&quot;&gt;@xscreensaver -no-splash
@xscreensaver-command -activate
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The first command launches the screensaver daemon on the desktop; the second instantly activates the screensaver. Voilà!&lt;&#x2F;p&gt;
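&lt;p&gt;To check the setup without rebooting, the running daemon can be poked from a terminal in the desktop session (a small sketch using standard xscreensaver-command options):&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code data-lang=&quot;bash&quot;&gt;# ask the daemon when the screensaver was last activated
$ xscreensaver-command -time
# start the slideshow immediately
$ xscreensaver-command -activate
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;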
&lt;p&gt;Connected to the TV’s USB and HDMI ports, the Raspberry Pi boots as soon as the TV is turned on. Only 10 to 20 seconds later, we can see our photos smoothly fading on the TV.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>About the dying of a species</title>
        <published>2018-08-02T00:00:00+00:00</published>
        <updated>2018-08-02T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/about-the-dying-of-a-species/"/>
        <id>https://kaiwegner.online/blog/about-the-dying-of-a-species/</id>
        
        <content type="html" xml:base="https://kaiwegner.online/blog/about-the-dying-of-a-species/">&lt;h2 id=&quot;soon-there-will-be-no-programmers-left&quot;&gt;Soon there will be no programmers left.&lt;&#x2F;h2&gt;
&lt;p&gt;When I started programming in the 2000s, I thought programmers were people who fully understood the computer and the Internet. I quickly realized that this was not the case. But is that good or bad?&lt;&#x2F;p&gt;
&lt;p&gt;Later, as I gained more experience, I always felt that there were two types of programmers.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Programming as an end in itself:&lt;&#x2F;strong&gt; The first group thinks very algorithmically. They are interested in finding an interesting solution to a problem. As a rule, this group is happy when their code runs more efficiently (runtime or memory consumption) than the standard solution. But that doesn’t have to be the case – if the code lives up to their aesthetic standard, they will be happy.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Programming as a means to an end:&lt;&#x2F;strong&gt; Programmers in the second group work primarily on creating added value for the customer. In this case, the customer can be a client or the user. The question that such a programmer asks himself with every action is: Do I create added value [for the user] through my intervention?&lt;&#x2F;p&gt;
&lt;p&gt;User-centered thinking is nothing new, even if people like to portray it that way. In game development especially, it is a focus, so that the fun of the game is not forgotten amid all the technology.&lt;&#x2F;p&gt;
&lt;p&gt;Most programmers do not fall exclusively into either camp; they belong to both groups. Depending on the challenge, however, they tend towards the behavior of one or the other. Both groups try to deeply understand the technologies they work with. The difference lies in the drive: added value vs. performance, utility vs. elegance, and so on.&lt;&#x2F;p&gt;
&lt;p&gt;Often – at least that’s how I see it – it was mainly the younger programmers who were more likely to find themselves in the second (new) camp. I thought I could make out that there was a big trend towards the new camp. Whether that is the case or not, everyone can reflect for themselves.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;is-this-relevant-in-the-future&quot;&gt;Is this relevant in the future?&lt;&#x2F;h2&gt;
&lt;p&gt;In his &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;ewus.de&#x2F;blog&#x2F;2018-06-26&#x2F;software-20&quot;&gt;blog&lt;&#x2F;a&gt;, Erik Wegner wrote: “Software 1.0 means creating algorithms through engineering.”&lt;&#x2F;p&gt;
&lt;p&gt;OK. What about 2.0, then? “Everything that can be automated will be automated.”&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.dev-insider.de&#x2F;bis-aufs-schreiben-von-code-kann-alles-automatisiert-werden-a-644231&#x2F;&quot;&gt;Derek Weeks&lt;&#x2F;a&gt; said: “Except for writing code, everything can be automated.”&lt;&#x2F;p&gt;
&lt;p&gt;Well. According to &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;vimeo.com&#x2F;272696002&quot;&gt;Andrej Karpathy&lt;&#x2F;a&gt; (Director of AI, Tesla), this is a complete misjudgment. Google and Tesla, and presumably also Amazon, Apple and Microsoft, are working feverishly to automate the programming of algorithms. With neural networks, an algorithm is generated whose result can be optimized through further training. You can also make trade-offs with the algorithms generated by AI: higher performance in exchange for more memory consumption? Higher accuracy at the expense of performance? No problem at all if the entire program is written by an AI.&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;Software development 2.0 means optimising by classifying input data until the desired result is achieved. – &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;ewus.de&#x2F;blog&#x2F;2018-06-26&#x2F;software-20&quot;&gt;Erik Wegner&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Programmers will become AI trainers.&lt;&#x2F;strong&gt; We will help the AI find the desired result; we evaluate and curate its output.
The irony is that a professional group now faces the kind of transformation it has itself continuously imposed on other professional groups over the past 50 years. Are programmers automating themselves away?&lt;&#x2F;p&gt;
&lt;p&gt;Luckily, design at least will always be done by people. We (designers) know what works and what looks good! &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.fastcompany.com&#x2F;3068884&#x2F;adobe-is-building-an-ai-to-automate-web-design-should-you-worry&quot;&gt;Not only Adobe disagrees&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;Let’s hope that, as is usual with automation, at least as many new jobs are created as are made obsolete.&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>VR for the masses</title>
        <published>2016-11-30T00:00:00+00:00</published>
        <updated>2016-11-30T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/vr-for-the-masses/"/>
        <id>https://kaiwegner.online/blog/vr-for-the-masses/</id>
        
<content type="html" xml:base="https://kaiwegner.online/blog/vr-for-the-masses/">&lt;p&gt;Today I went to a meetup at &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.space-shack.com&#x2F;&quot;&gt;https:&#x2F;&#x2F;www.space-shack.com&#x2F;&lt;&#x2F;a&gt; with several VR experts from an impressive line-up of companies and backgrounds. We discussed the topic of “VR for the masses”, which I enjoyed very much.&lt;&#x2F;p&gt;
&lt;p&gt;As the only game developer in the room, I think I brought a unique perspective to the talk. At the same time, it left a lot to be desired, as the time to talk was so limited. There are one or two things I would like to stress about VR, and I think my blog is generic enough to be the right place for the topic.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;vr-for-the-masses&#x2F;IMG_0270-600x300.jpg&quot; alt=&quot;&quot; &#x2F;&gt;
&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;vr-for-the-masses&#x2F;IMG_1249-600x300.jpg&quot; alt=&quot;&quot; &#x2F;&gt;
&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;vr-for-the-masses&#x2F;IMG_1439-600x300.jpg&quot; alt=&quot;&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;We were talking about storytelling&lt;&#x2F;strong&gt; in games and how they manage to put the player at the center of the story even without options to interact with the story directly. I mentioned Half-Life as an example, when one quote came up that specifically caught my ear:&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;When I play a game, no matter if Half-Life or GTA V, I always skip the story to get to do the quest. – Anonymous&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;I mentioned Half-Life specifically because it does not contain cutscenes at all – and we are talking about a game from 1998. That absence of cutscenes was a revolution in storytelling and certainly helped games develop as a culture.&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;Unlike many other games at the time, Half-Life features no &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Cutscene&quot;&gt;cutscenes&lt;&#x2F;a&gt;; – &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Half-Life_(video_game)&quot;&gt;Wikipedia&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;If you don’t have a background in game development or are critically reviewing games, let me just drop the following notes for you to consider: games have been successfully implementing non-linear storytelling for over 20 years. We know how to do that. We also know how to put the “consumer”, the “viewer”, the “player” at the center of the action or at the center of the story. We know how to talk to them directly, breaking the &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Fourth_wall&quot;&gt;fourth wall&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;The second thing&lt;&#x2F;strong&gt; I want to talk about is KPIs. When releasing content – no matter if it is a game, video, text or whatever – there is no rational, valid answer to the question “What if?”.&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;What if you had not created a 360° video but a standard video instead?&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;Nobody can possibly know how well a standard video would have done compared to a 360° video. To compare them scientifically, you would need proper A&#x2F;B testing across all your audiences – and a huge audience at that – to get a good sample!&lt;&#x2F;p&gt;
&lt;p&gt;The number of different factors to take into account when releasing content successfully is overwhelming. I think, however, this one stands out:&lt;&#x2F;p&gt;
&lt;p&gt;It is a bit like comparing how well a novel did compared to a comic book. Both videos are assembled from moving pictures, sure. But you create the content according to the medium through which it is received, so naturally your 360° video will be different: a less jittery camera, often almost no movement at all, no fast-paced cuts, special audio, and a camera usually positioned at human height. The result is a completely different video.&lt;&#x2F;p&gt;
&lt;p&gt;I honestly think the discussion was fruitful, but to everybody in VR:&lt;&#x2F;p&gt;
&lt;p&gt;Please take a look at the games industry!&lt;&#x2F;p&gt;
</content>
        
    </entry>
    <entry xml:lang="en">
        <title>Custom LEGO builds</title>
        <published>2016-10-10T00:00:00+00:00</published>
        <updated>2016-10-10T00:00:00+00:00</updated>
        
        <author>
          <name>
            
              Unknown
            
          </name>
        </author>
        
        <link rel="alternate" type="text/html" href="https://kaiwegner.online/blog/custom-lego-builds/"/>
        <id>https://kaiwegner.online/blog/custom-lego-builds/</id>
        
<content type="html" xml:base="https://kaiwegner.online/blog/custom-lego-builds/">&lt;p&gt;This is an unconventional post, but after watching &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=6wkrZpy5sns&quot;&gt;this video&lt;&#x2F;a&gt; by &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;www.youtube.com&#x2F;user&#x2F;phreakindee&quot;&gt;LGR&lt;&#x2F;a&gt; I just had to order a LEGO burger along with &quot;My first computer&quot; by PowerPig.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;custom-lego-builds&#x2F;PA100412.jpg&quot; alt=&quot;finished custom lego builds&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;You can buy both kits here: &lt;a rel=&quot;external&quot; href=&quot;https:&#x2F;&#x2F;powerpig.ecwid.com&#x2F;&quot;&gt;https:&#x2F;&#x2F;powerpig.ecwid.com&#x2F;&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;p&gt;The builds are very well documented (build guides), and they are really fun to build – I actually did it together with my wife this evening. The internals of the PC and the burger are very well built; you actually build parts that can never be seen from the outside. I like that.&lt;&#x2F;p&gt;
&lt;p&gt;The only downside might be the high cost of the builds, as I had to pay import tax on the package. Anyway, it was a lot of fun, and having this small computer and burger around is a joy 🙂&lt;&#x2F;p&gt;
&lt;div align=&quot;center&quot;&gt;
    
        &lt;a href=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;custom-lego-builds&#x2F;PA100403.jpg&quot; target=&quot;_blank&quot;&gt;
          &lt;img src=&quot;https:&amp;#x2F;&amp;#x2F;kaiwegner.online&amp;#x2F;processed_images&amp;#x2F;PA100403.664ab1d30f9a7ad1.jpg&quot; &#x2F;&gt;
        &lt;&#x2F;a&gt;
        &lt;a href=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;custom-lego-builds&#x2F;PA100404.jpg&quot; target=&quot;_blank&quot;&gt;
          &lt;img src=&quot;https:&amp;#x2F;&amp;#x2F;kaiwegner.online&amp;#x2F;processed_images&amp;#x2F;PA100404.f2d65b60432e3652.jpg&quot; &#x2F;&gt;
        &lt;&#x2F;a&gt;
        &lt;a href=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;custom-lego-builds&#x2F;PA100412.jpg&quot; target=&quot;_blank&quot;&gt;
          &lt;img src=&quot;https:&amp;#x2F;&amp;#x2F;kaiwegner.online&amp;#x2F;processed_images&amp;#x2F;PA100412.9bf1a6cfa2af5c60.jpg&quot; &#x2F;&gt;
        &lt;&#x2F;a&gt;
        &lt;a href=&quot;https:&#x2F;&#x2F;kaiwegner.online&#x2F;blog&#x2F;custom-lego-builds&#x2F;PA100417.jpg&quot; target=&quot;_blank&quot;&gt;
          &lt;img src=&quot;https:&amp;#x2F;&amp;#x2F;kaiwegner.online&amp;#x2F;processed_images&amp;#x2F;PA100417.9e951ea2d72ad013.jpg&quot; &#x2F;&gt;
        &lt;&#x2F;a&gt;
    &lt;&#x2F;div&gt;</content>
        
    </entry>
</feed>
