
All is good. No data has been harmed in the process. Now drive #8 needs to be replaced.
The Podlove project once again leads the way in improving the experience and the way we interact with knowledge and thoughts: the most recent announcement introduces transcription support for podcasts. Click through right now and try it yourself.
Fulltext-search. Listening to podcasts by reading them. This-is-amazing!
Transcripts are coming
from the Podlove Publisher 2.8 announcement
Transcripts are an incredibly desirable thing to have for podcasts: they allow searching for specific parts, improve discoverability by search engines when presented properly, and significantly increase the accessibility of audio content.
However, transcripts have been considerably difficult to create and use. Manually created transcripts cost time and money, and even if you spend the money there has been no technical standard for storing and integrating transcripts into websites in a defined way.
This is now slowly changing: more and more automated speech-to-text systems are becoming available at reasonable cost, and they are creating ever better transcripts with more and more languages being supported.
Still, automatic transcripts trail manually created ones in terms of accuracy, punctuation and so on, but they are increasingly useful when used primarily to improve search results or to help with your internal research when trying to find content in your older episodes.
New services are also coming up to deal with these problems by allowing users to quickly build on automatic transcripts and improve them manually in an assisted fashion. We will soon see a landscape of tools and services that makes creating transcripts easy and cheap enough for more and more podcasters, so it's time to come up with a good integration.
Last but not least, the WebVTT file format has become a de-facto common denominator for passing transcripts along, supporting time codes, speaker identification and a rudimentary set of metadata. While not perfect, it's enough to get a transcript infrastructure up and running, and Podlove is leading the way.
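For readers who haven't seen one: a WebVTT transcript is just a header followed by time-coded text blocks. Here is a tiny illustrative sample and a naive Python sketch of pulling cues out of it — not Podlove's actual implementation, just the idea:

```python
# Minimal, illustrative WebVTT cue parsing -- not a full parser.
# A real transcript would come from a .vtt file; here we inline a sample.
sample = """WEBVTT

00:00:01.000 --> 00:00:04.500
<v Tim> Welcome to the show!

00:00:04.500 --> 00:00:09.000
<v Anna> Today we talk about transcripts.
"""

def parse_cues(vtt_text):
    """Return (start, end, text) tuples from a simple WebVTT document."""
    cues = []
    blocks = vtt_text.strip().split("\n\n")
    for block in blocks[1:]:  # skip the WEBVTT header block
        lines = block.splitlines()
        start, _, end = lines[0].partition(" --> ")
        cues.append((start, end, " ".join(lines[1:])))
    return cues

for start, end, text in parse_cues(sample):
    print(start, end, text)
```

The `<v Speaker>` voice spans are the rudimentary speaker identification mentioned above.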
This website is delivered to you by a single dedicated server in a datacenter in Germany. This server is old.
11:13:58 up 1320 days, 25 min, 2 users, load average: 1.87, 1.43, 1.25
uptime
And I am replacing it. While doing so I am going to take some shortcuts to lower the effort I have to put in for the move.
It will save me 2 days of work. For you it means: there might be some interruptions of the services provided by this website (there is more to it than this page…).
Pungenday 70 Discord, 2784 YOLD (May 23, 1618 AD) Prague: a few royal officials were thrown out a window of Hradcany Castle by some noblemen, but survived the fall by landing in a cart full of manure. The date for this event falls on an extremely Illuminated day of the Gregorian Calendar, 5/23.
shared calendar
In 2012 I experienced streamed game play for the first time. I was a beta user of the OnLive service, which created a bit of a buzz back then.
Last week Google announced that it is stepping into the game streaming business as well: Google Stadia, the Google-powered game streaming platform. It will come with its own controller.
And this controller is the most interesting bit. We have seen video live streaming. We have seen and played streamed games. But every time we needed some piece of software or hardware that brought screen, controller and player together.
The Google Stadia controllers now do not connect to the screen in front of you. The screen, for all it knows, just shows a low-latency video/audio stream.
The controller connects to your wifi and directly to the game session. Everything you input with the controller is sent directly to the Google Stadia session in a Google datacenter, with no dedicated console hardware in between. And this will make a huge difference: all of a sudden the screen is only a screen, and the controller connects to the “cloud console” far, far away as if it was sitting right below the screen.
“It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”
Upton Sinclair
Learning a new language is full of discoveries along the way!
The more time I spend on learning the Japanese language, the more things seem to unlock. One of those things is the apparent fun Japanese companies have with puns and slight writing mismatches.
Like this one – I think (as I can not be 100% sure yet…learning!):
This is an advertisement in a supermarket for a laundry detergent. It is themed around the anime “Attack on Titan” – probably because the detergent’s name is Attack. So when I tried to make sense of the text I first read it wrong, of course.
Let’s look at it step-by-step:
I first started reading the Hiragana portion and tried to make sense of it. There I made my first mistake, which was to misread one of the first characters. For some reason my brain went for わ (wa) when I should have gone for れ (re).
Then I typed away further and came to the Kanji. I read a 活 (katsu) when it in fact was a 汚 (kitanai).
If you type those into Google Translate you get very interesting results. I had a good laugh by then:
I am not sure if this is on purpose or not – I do not yet know whether I am just making a mess of this or whether it is intentionally done so that, depending on your level of Japanese reading and the attention you spend reading it, you get very different and funny results.
Any Japanese readers that can add some explanations? Am I far off with the thoughts?
For a couple of months now we have been trying harder to learn a foreign language.
And, as we expected, it is very hard to get a proper grasp on speaking the language – especially since it is very different from our mother tongue.
And while comfortably interacting with digital assistants around the house every day in English and German, the thought came up: why don’t these digital assistants help with foreign-language listening and speaking training?
I mean, Google Assistant answers questions in the language you asked them in. Siri and Alexa need to know upfront in which language you are going to ask questions. But at least Alexa can translate between languages…
But in all seriousness: why do we not already have this obvious killer feature? Everyone could already have a personal language training partner…
I’ve recently written about the privacy and data security state of this blog and starting today all content is being provided encrypted.
Bless the wonderful Let’s Encrypt project!
It is a simple one-step process: shove unasked advertising in my face. Bonus points for loud full-blast audio right from the start.
If I ever see unasked advertising, whether it tries to be sneaky or not, I am going to block it without even noticing whom or what it was for.
But when it is shown so often and so intrusively that I do take note of your brand, that brand is not considered for future business anymore.
That goes especially for services where I am the product, paying with my data.
I’ve written about the progressive web application functionality provided by this blog. But I missed explaining, in all simplicity, what it means for most of you trying to read it.
This is where Volker explains in simple terms what to do:
Step 1: Tap this icon in your browser:
(maybe someone can send me an android icon that does this?)
Enjoy the quick access to this blog.
A bit of feedback is in on the plan to revitalize this blog. Thanks for that!
I have spent some more time this weekend on getting everything a bit tidied up.
There is the archive of >3.000 posts that I plan to review and re-categorize.
There is the big number of comments made in the past; I need to come up with a plan for how to allow/disallow/deal with comments and discussions in general on this website.
There are also the design and template aspects of this website. I switched to a different template and started adjusting it so that it makes access to the stream of posts as easy as possible. Until then you need to wait or contact me through other means. But contacting is another post for another time.
The last Ubuntu kernel update seemingly kicked two hard disks out of a ZFS raidz – sigh. With ZFS on Linux this poses an issue:
Two hard drives that previously were in this ZFS pool named “storagepool” were assigned a completely different device-id by Linux. So /dev/sdd became /dev/sdf and so on.
ZFS uses a specific metadata structure to encode information about each hard drive and its relationship to storage pools. When Linux reassigned names to the hard drives, apparently some things got shaken up in ZFS’ internal structures and mappings.
The solution was these steps:
After poking around for about 2 hours the above strategy made the storage pool start rebuilding (“resilvering” in ZFS speak). Well – you learn something every day.
Bonus: I was not immediately informed of the DEGRADED state of the storage pool. That needs to change. A simple command is now run by crontab every hour.
zpool status -x | grep state: | tr --delete state: | mosquitto_pub -t house/stappenbach/server/poppyseeds/zpool -l
This pushes the ZFS storage pool state to MQTT, where it gets worked on by a small NodeRed flow.
I am currently in the process of reducing my presence on the usual social networks. Here is my reasoning and how I will do it.
Facebook, Twitter, Instagram and the like are seemingly at the peak of their popularity, and more and more users are getting concerned about how their data and privacy are handled by those social networks. So am I.
Now my main concern is not so much the privacy side. I never published anything on a social network – private or public – that I would not want to see published or freely distributed/leaked. But:
I have published content with the intention that it would be accessible to everyone now and in the future. The increasing risk is that those publishing platforms are going to fade away and thus will render the content I had published there inaccessible.
My preferred way of publishing content and making sure that it stays accessible is this website – my personal blog.
I have been doing this since 2004 – the exact year that Facebook was founded. And apparently this website and its content have a good chance of being available longer than the biggest social network of the present time.
So what does this mean? 3 basic implications:
As you can see: this is not about a cut or abstinence. I get information out of social networks and tweet message flows. But I do not put any trust in the longevity of either the platforms or the content published there.
The next step for me will be a complete overhaul of this website: getting everything up to current standards to streamline my publishing process.
Expect a lot of content and change – and: welcome to my blog!
I had this strange problem that my car was not able to display Japanese characters when confronted with them. Oh, the marvels of inserting a USB stick into a car from 2009.
Now there is no real option I know of to get it to display the characters right without risking bricking the car’s entertainment system.
Needless to say that my wife’s car does the trick easily – of course, it’s an Asian car!
Anyway. I wrote a command line tool using some awesome pre-made libraries to convert Hiragana and Katakana characters to their romaji counterparts.
You can find it on github: https://github.com/bietiekay/romaji
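For illustration only – the actual tool builds on proper libraries, but the core idea is a character mapping. This toy version deliberately ignores digraphs (like きょ) and context-dependent readings (the particle は is pronounced “wa”, for instance), which is exactly why real libraries are needed:

```python
# A toy kana-to-romaji mapping -- NOT the actual tool, just the idea.
KANA = {
    "こ": "ko", "ん": "n", "に": "ni", "ち": "chi", "は": "ha",
}

def to_romaji(text):
    # Map each kana; pass unknown characters through unchanged.
    return "".join(KANA.get(ch, ch) for ch in text)

# Naive result: "konnichiha" -- a proper converter would say "konnichiwa".
print(to_romaji("こんにちは"))
```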
To all techies reading this:
GIST: I am looking for interested hackers who want to help me implement a neural network that improves the accuracy of bluetooth low energy based indoor location tracking.
Longer version:
I am currently applying the last finishing touches to a house-wide bluetooth low energy based location tracking system (all of which will be open-sourced).
The system consists of 10+ ESP-32 Arduino-compatible WiFi/Bluetooth systems-on-a-chip – at least one per room of the house.
These modules are very low-powered and have one task: they scan for BLE advertisements and send the MAC and manufacturer data plus the RSSI (signal strength) over WiFi into specific MQTT topics.
There is currently a server component that takes this data and calculates the probable location of a seen bluetooth low energy device (like the Apple Watch I am wearing…). It uses a calibration phase to level in on a minimum accuracy, and then simple calculation matrices to identify the most probable location.
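To make the idea concrete, here is a heavily simplified sketch of one location guess: take the room whose scanner reports the strongest signal (RSSI closest to zero). Room names and values are made up; the real system uses calibration and calculation matrices as described:

```python
# Toy version of the "most probable location" step.
# Real input arrives via MQTT from the per-room ESP-32 scanners.
readings = {"livingroom": -71, "kitchen": -55, "bedroom": -83}

def probable_room(rssi_by_room):
    # Strongest (least negative) RSSI usually means the nearest scanner.
    return max(rssi_by_room, key=rssi_by_room.get)

print(probable_room(readings))  # -> kitchen
```

This naive argmax is exactly what a neural network could improve on, since walls, bodies and reflections distort RSSI badly.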
This all is very nice, but since I got interested in neural networks and AI development – and I think many others might be as well – I am asking interested parties to join the effort.
I do have an existing set-up as well as gigabytes of log data.
I know about previous works like „Indoor location tracking system using neural network based on bluetooth“
Now, I am totally new to the overall concepts and tooling, and I am starting to play with TensorFlow right now.
If you want to join, let me know by commenting!
Did you know how dangerous lithium-polymer batteries can be? Well, if not treated well they can literally burst into flames spontaneously.
So it’s quite important to follow a couple of guidelines in order not to burn down the house.
Since I am just about to get into the hobby of FPV quadcopter racing, I’ve tried to follow those guidelines and found that the smart house can help me keep track of things.
Unfortunately there are not a lot of LiPo chargers available at a reasonable price with computer interfaces for monitoring the charging/discharging of the batteries. But there are a couple of workarounds I’ve found useful.
There are a couple more things to it, like keeping track of charging processes in a calendar, as you can see in the flowchart behind all of the above.
There are a lot of things that happen in the smart house that are connected somehow.
And the smart house knows about those events happening and might suggest, or even act upon the knowledge of them.
A simple example:
In our living room we’ve got a nice big aquarium which, depending on the time of day and the season, simulates its very own little dusk-till-dawn light show for the pleasure of its inhabitants.
Additionally, the water quality is improved by an air pump generating nice bubbles and enriching the water with oxygen. But that comes at a cost: when you are in the room, those bubbles and the hissing sound of the inverter for the “sun” produce sounds that are distracting and disturbing in the otherwise quiet room.
Now the smart home comes to the rescue:
It detects whenever someone enters the room and stays for longer, powers up the TV or listens to music. It also logs that, regularly, when these things happen, the aquarium air and maybe the lights are turned off – and turned back on again when the person leaves.
These correlations are what the smart house uses to identify groups of switches, events and actions that are somehow tied together. It will prepare a report and recommend actions which, at the push of a button, can become a routine task that is always executed when certain characteristics line up.
And since the smart house is a machine, it can look for correlations in a lot more dimensions than a human could: date, time, location, duration, sensor and actuator values (power up TV, temperature in room < 22, calendar = November, windows closed => turn on the heating).
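Such a learned correlation could boil down to a simple predicate over the house state – a sketch with invented field names, not the actual system’s schema:

```python
# One learned rule as a plain predicate over a snapshot of house state.
def heating_rule(state):
    return (state["tv_on"]
            and state["room_temp_c"] < 22
            and state["month"] == "November"
            and state["windows_closed"])

state = {"tv_on": True, "room_temp_c": 20.5,
         "month": "November", "windows_closed": True}
print(heating_rule(state))  # -> True, so: turn on the heating
```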
Did you notice that most calendars and timers are missing an important feature? Some information that I personally find most interesting to have readily available.
It’s the information about how much time is left until the next appointment. Even smartwatches, which should be jacks-of-all-trades in regards to time and schedule, do not display the “time until the next event”.
Now, I came across this shortcoming when I started to look for this information: no digital assistant can tell me right away how much time is left until a certain event.
But the connected house is based upon open technologies, so one can add this kind of feature easily oneself. My major use cases for this are (a) focused work, (b) planning quick work-out breaks and (c) making sure there’s enough time left to actually get enough sleep.
As you can see in the attached picture, my watch always shows me the hours (or minutes) left until the next event. I use separate calendars for separate displays – so there’s actually one for when I plan to get up and do work-outs.
Having the hours left until something is supposed to happen at a glance – and of course being able to verbally ask through chat or voice in any room of the house how long until the next appointment – gives peace of mind :-).
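The underlying computation is trivial once the calendar data is accessible – a sketch, where the real input would come from the calendar feeds:

```python
# "Time until next event" from a list of event start times.
from datetime import datetime

def time_until_next(now, event_starts):
    # Ignore events already in the past; None if nothing is upcoming.
    upcoming = [t for t in event_starts if t > now]
    return min(upcoming) - now if upcoming else None

now = datetime(2018, 5, 1, 12, 0)
events = [datetime(2018, 5, 1, 9, 0), datetime(2018, 5, 1, 15, 30)]
print(time_until_next(now, events))  # -> 3:30:00
```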
The Internet of Things might as well become your Internet of Money. Some feel the future lies with blockchain-related things like Bitcoin or Ethereum, and they might be right. Meanwhile there’s also this huge field of personal finances that impacts our lives all day, every day.
And if you think about it, money has a lot of touch points throughout all situations of our lives, and so it also impacts the smart home.
Lots of sources of information can be accessed today and can help you stay on top of things as well as make conscious decisions and plans for the future. To a large extent the information is even available in real time.
– cost tracking and reporting
– alerting and goal setting
– consumption and resource management – like fuel oil (get alerted on price changes, …)
– stock monitoring and alerting
– and, more advanced, even automated trading
– bank account monitoring, in- and outbound transactions
– expectations and planning
– budgeting
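As a tiny example of the alerting idea – say, for fuel oil price changes – a threshold check is all it takes (numbers and threshold invented):

```python
# Alert when a watched price moves more than a given percentage.
def price_alert(old, new, threshold_pct=2.0):
    change_pct = (new - old) / old * 100
    return abs(change_pct) >= threshold_pct

# Heating oil going from 0.68 to 0.71 EUR/l is roughly +4.4% -> alert.
print(price_alert(0.68, 0.71))  # -> True
```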
After all, this is about getting away from lock-in applications, freeing your personal financial data and having an all-over dashboard of transactions, plans and status.
Water! Fire! Whenever one of those are released uncontrolled inside the house it might mean danger to life and health.
Having a couple of fish and turtle tanks spread out in the house, and in addition a server rack in the basement, it’s important to know at a moment’s notice when there’s a water leak.
As the server room also houses some water pumps for a well, you’ve got all sorts of dangers mixed in one location: water and fire hazards.
To detect water leaks, all tanks and the pumps for the well are equipped with water sensors which send out an alerting signal as soon as water is detected. This signal is picked up, pushed to MQTT topics, and from there centrally consumed and reacted upon.
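The central consumer can be as simple as this sketch – the topic layout and payload are made up for illustration:

```python
# React when any leak sensor reports water on its MQTT topic.
def on_message(topic, payload, notify):
    if topic.startswith("house/sensors/water/") and payload == "WET":
        # The last topic segment names the sensor location.
        notify("Water detected: " + topic.rsplit("/", 1)[-1])

alerts = []
on_message("house/sensors/water/basement", "WET", alerts.append)
on_message("house/sensors/water/aquarium", "DRY", alerts.append)
print(alerts)  # -> ['Water detected: basement']
```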
Of course the server rack sits above the water level, so at least there is time to send out alerts even while power is out for the rest of the house (all necessary network and uplink equipment is on its own batteries).
For alerting when there is smoke or a fire, the same logic applies, but using some loud-as-hell smoke detectors. The smoke detectors interconnect with each other and make up an alerting mesh: if one goes off, all go off. One of them I’ve connected to its very own ESP8266, which sends a detection signal to another MQTT topic, effectively alerting on the event of a fire.
In one of the pictures you can see what happened when the basement water detector detected water while the pump was being replaced.
A lot of things in a household have weight, and knowing their weight might be crucial to health and safety.
Some of those weight applications might tie into this:
– your own body weight over a longer timespan
– the weight of your pets, weighed automatically (like on a kitty litter box)
– the weight of food and ingredients for recipes as well as their caloric and nutrition values
– keeping track of fill levels on the basis of weight
All those things are easily done with connected devices measuring weight. Like the kitty-litter box at our house weighing our cat every time. Or the connected kitchen scale sending its gram measurements into an internal MQTT topic, which are then displayed and given more smarts through an app on the kitchen iPad that consumes those MQTT messages and offers recipe weigh-in functions.
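The kitty-litter box logic, for example, boils down to something like this sketch – all numbers invented:

```python
# Derive cat weights from litter-box scale readings by subtracting the
# empty weight; ignore readings too light to be the cat.
def cat_weights(readings_g, empty_g=2000, min_cat_g=2500):
    return [r - empty_g for r in readings_g if r - empty_g >= min_cat_g]

# Two visits of the cat among idle readings:
print(cat_weights([2001, 6150, 2000, 6180]))  # -> [4150, 4180]
```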
It’s not only surveillance but pro-active use: there are beekeepers who monitor the weight of their beehives to see what’s what. And you can monitor all sorts of things in the garden to learn more about their wellbeing (any plants, really).
Weekend is laundry time! The smart house knows and sends out notifications when the washing machine or the laundry dryer are done with their job and can be cleared.
Of course this can all be extended with more sensory data, like power consumption measurements at the actual sockets to filter out specific devices much more accurately. But for simple notification alerting it’s apparently sufficient to monitor just at the house’s central power distribution rack.
On the side, this kind of monitoring and pattern matching is also useful to identify devices going bad. Think of monitoring the power consumption of a fridge: when its compressor goes bad, it’s going to consume an increasing amount of power over time. You would figure out the malfunction before the device fails completely.
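A minimal sketch of such a trend check – the wattage numbers are invented:

```python
# Flag a device whose average power draw creeps upward over time.
def rising_trend(weekly_avg_watts, tolerance=1.10):
    # True if the latest week exceeds the first by more than 10%.
    return weekly_avg_watts[-1] > weekly_avg_watts[0] * tolerance

print(rising_trend([95, 97, 104, 112]))  # fridge going bad -> True
```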
Same for all sorts of pumps (water, oil, aquarium,…).
All this monitoring and pattern matching the smart house does so its inhabitants don’t have to.
We love music. We love playing it loud across the house. And when we did that in the past, we missed some things happening around us.
Like that delivery guy ringing the front doorbell and us missing an important delivery.
This happened a lot – UNTIL we retrofitted a little PCB to our doorbell circuit to make the house aware of ringing doorbells.
Now every time the doorbell rings, a couple of things can take place:
– push notifications to all devices, screens, watches – that wakes you up even while doing workouts
– pause all audio and video playback in the house
– take a camera shot of who is in front of the door pushing the doorbell
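The wiring-up part is essentially a fan-out of one event to many independent actions – a sketch with made-up action names:

```python
# Fan one doorbell event out to every registered action.
def on_doorbell(actions):
    # Run each action; collect results for the house's event log.
    return [action() for action in actions]

log = on_doorbell([
    lambda: "push notifications sent",
    lambda: "playback paused",
    lambda: "camera snapshot taken",
])
print(log)
```

Adding a future action is then just appending another callable to the list.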
And: it’s easy to wire up new things, whatever those may be in the future.
So how do you manage all these sensors and switches, and lights, and displays and speakers…
One way has proven to be very useful and that is by using a standard calendar.
Yes, the one you got right on your smartphone or desktop.
A calendar is a simple manifestation of events in time and thus it can be used to either protocol or schedule events.
So the smart house uses calendars to:
So what am I using these calendars for? Simple. They are there to track travelling, since I know when I was where by simply searching the calendar (screenshot). It’s easy to make out patterns and times of things happening, since a calendar/timeline view feels natural. And setting on/off times and the like is just bliss if you can do it from your phone in an actual calendar rather than in a tedious additional app or interface.
And of course: the house can only be smart about things when it has a way to gather and access that data. Reacting to its inhabitants’ upcoming and previous events adds several levels of smartness.
We all know it: after a long day of work you chilled out on your bean bag and fell asleep early. You’ve got to get up and into your bed upstairs. So usually the light goes on, you go upstairs, into bed. And there you have it: you’re not sleepy anymore.
Partially this is caused by the light you turned on. If that light is bright enough and has the right color, it will wake you up no matter what.
To fight this companies like Apple introduced things like “NightShift” into iPhones, iPads and Macs.
“Night Shift uses your computer’s clock and geolocation to determine when it’s sunset in your location. It then automatically shifts the colors in your display to the warmer end of the spectrum.”
Simple, eh? Now why does your house not do the same to prevent you from being ripped out of your sleepy state while tiptoeing upstairs?
Right! This is where the smart house will be smart.
Nowadays we’ve got all those funky LED bulbs that can be dimmed and even have their colours set. Why none of those market offerings come with this simple feature is beyond me:
After sunset, when turned on, default dim to something warmer and not so bright in general.
I did implement it, and it’s appropriately called the “U-Boot light”. Whenever we roam around the upper floor at night, the light that follows our steps (it’s smart enough to do that) will not go full blast but light up dimly with a reddish color to prevent wake-up calls.
The smart part being that it will take into account:
– movement in the house
– sunset and dawn depending on the current geographic location of the house (more on that later, no it does not fly! (yet))
– it’ll turn the light on and off according to the path you’re walking, using the various sensors that are around anyway
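The core decision is tiny – a sketch with invented hours and values (the real house derives sunset and dawn from its geographic location):

```python
# Pick dim/warm settings at night, normal ones during the day.
def light_settings(hour, sunset_h=21, sunrise_h=6):
    if hour >= sunset_h or hour < sunrise_h:
        return {"brightness_pct": 20, "color": "warm red"}
    return {"brightness_pct": 100, "color": "neutral white"}

print(light_settings(23))  # -> {'brightness_pct': 20, 'color': 'warm red'}
```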
Now that you got your home entertainment reacting to you making a phone call (use case #1) as well as your current position in the played audiobook (use case #3) you might want to add some more location awareness to your house.
If your house is smart enough to know where you are, outside, inside, in what room, etc. – it might as well react on the spot.
So when you leave/enter the house:
– turn off music playing – pause it and resume it when you come back
– shut down unnecessary equipment to limit power consumption when not in use, and start it back up to the previous state (TVs, media centers, lights, heating) when you are back
– arm the cameras and motion sensors
– start running bandwidth-intense tasks when nobody is using resources inside the house (like backing up machines, running updates)
– let the roomba do its thing
– switch communication coming from the house into different states, since it differs for notifications, managing lists, spoken commands and so on
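All of these boil down to routines triggered by presence transitions – a sketch with invented action lists:

```python
# Run enter/leave routines when the house's presence state changes.
def presence_transition(was_home, is_home):
    if was_home and not is_home:
        return ["pause music", "arm cameras", "start backups"]
    if not was_home and is_home:
        return ["resume music", "disarm cameras"]
    return []  # no change, nothing to do

print(presence_transition(True, False))
```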
There are a lot of things that benefit from location awareness.
Bonus points for outside-the-house awareness, represented like a “Weasley clock”… “xxx is currently at work”.
Bonus points combo breaker for using an open-source service like Miataru (http://miataru.com/#tabr3) for location tracking outside the house.
So you’re listening to this audiobook for a while now; it’s quite long but really thrilling. In fact, it’s too long for you to get through in one sitting. So you pause it, and eventually listen to it on multiple devices.
We’ve got SONOS in our house and we’re using it extensively. Nice thing, all that connected goodness. It’s just short of some smart features – like remembering where you paused and resuming a long audiobook at the exact position you stopped at the last time. Every time you play a different title, it resets the play position and does not remember where you were.
With some simple steps the house will know the state of all the players it has. Not only SONOS, but maybe also your VCR or media center (later use case coming up!).
Putting the strings together, you get this:
Whenever a title has been playing for longer than 10 minutes and is paused or stopped, the smart house will remember who, where and what has been played, and the position you were at.
Whenever that person then resumes playback, the house knows where to seek to. It’ll resume playback on any supported system at that exact position.
Makes listening to these things just so much easier.
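The bookmarking rule described above fits in a few lines – a sketch of the idea, not the actual sonos-auto-bookmarker code:

```python
# Remember positions of anything played longer than 10 minutes,
# keyed by listener and title.
bookmarks = {}

def on_stop(user, title, position_s):
    if position_s > 10 * 60:
        bookmarks[(user, title)] = position_s

def resume_position(user, title):
    # 0 means: start from the beginning.
    return bookmarks.get((user, title), 0)

on_stop("me", "long audiobook", 3720)
on_stop("me", "short jingle", 45)      # too short, not remembered
print(resume_position("me", "long audiobook"))  # -> 3720
print(resume_position("me", "short jingle"))    # -> 0
```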
Bonus points for a mobile app that does the same thing, just on your phone: park the car, go into the house, and the audiobook continues playback – just now in the house instead of the car. The data is there; why not make use of it?
p.s.: a big part of that I open-sourced years ago: https://github.com/bietiekay/sonos-auto-bookmarker
“making your home smarter”, use case #2
know how much oil your house burns just by measuring the light of the furnace going on/off and calculating the oil throughput of the valves from the burn time.
Over the period of 1 year it’s accurate to within +/- 20 liters of oil.
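The arithmetic behind it is simple – a sketch where the valve throughput of 2.0 l/h is an assumed figure for illustration, not the real one:

```python
# Estimate oil use from burner runtime and a constant valve throughput.
def oil_burned_liters(burn_seconds, throughput_l_per_h=2.0):
    return burn_seconds / 3600 * throughput_l_per_h

print(oil_burned_liters(5400))  # 1.5 h of burn time -> 3.0 liters
```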
7 day and 30 day graphs for solar power generation, power consumption, oil burn to heat water and outside temperatures to go along with.
Having everything in a time-series database makes such things a real blast… wandering around in the data across all the telemetry. There are almost 300 topics to pick from and combine.
Yes, generally the solar array produces more than the whole household consumes. Except that one 26th.
I am frequently using the xenim streaming network service, but I was missing the functionality to replay recent shows. With the wonderful Re-Live functionality made available through ReliveBot I have now added this replay feature, and I have been using it a lot since.
Within the SONOS controller app it looks like this:
To set-up this service with your SONOS set-up just follow the instructions shown here: a new Music Service for SONOS
Source 1: xenim streaming network
Source 2: ReliveBot
Source 3: Download the Custom Service
Source 4: a new Music Service for SONOS