“Run Login-AzureRmAccount to login.” in AzureRM when already logged in with PowerShell

A few months ago I worked on a release pipeline in VSTS. Because I experimented with Azure CLI 2.0 in my release template, I switched from the hosted build agent to an on-premises build agent. After setting things up and working out the details of how my release could run on my local build agent, I had a successful release pipeline.

Today I decided to move the build agent from my local machine to a remote server that suited my needs. I installed and configured the build agent as I had done on the other machine and checked my release pipeline with a deployment of already working bits.
But what I didn’t expect was the following error message:

It’s curious, because the log shows that a login already exists. So what could cause that error?

After some time-consuming investigation, I found that on my server the PowerShell modules I needed, such as AzureRM, were installed into different module folders, which left the agent and the release tasks – let’s call it – irritated.

You can see the list of installed modules:
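The screenshot of that listing is missing here, but it can be reproduced in PowerShell; the name filter below is an assumption on my part:

```powershell
# List all Azure-related modules together with the folder they are installed in.
# Several different ModuleBase paths for the same module indicate the kind of
# split installation that confused the agent in my case.
Get-Module -ListAvailable -Name Azure*, AzureRM* |
    Select-Object Name, Version, ModuleBase |
    Sort-Object Name |
    Format-Table -AutoSize
```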

The fix: I uninstalled all Azure PowerShell modules and reinstalled them with the Web Platform Installer.

An alternative might be something I found here (it didn’t work for me): Blog post from Darren Robinson
It describes a solution based on updating all modules, but read it yourself 🙂



My experiences with ESP8266 development

Reasons for ESP8266

A few weeks ago, I bought some ESP8266-12E modules with developer boards and started building IoT test solutions. I also have other dev boards such as the Intel Edison and the Arduino, so why switch to another one?
The reason is that the Edison is far from being “atomic” (it runs a complete Yocto Linux and therefore more features than I need). My Arduino Uno comes without WiFi, and buying a shield is expensive compared to an ESP, although for simple prototyping or my own home automation tasks it is sufficient. So a neat little device that can “concentrate” on its task is what I want.
I often lose time fighting with the Edison’s complexity and OS. With the ESP I can move forward by just writing code and developing hardware. There are no services that can interrupt my work.

Starting development

At first, I tried to find a way into development, so I looked around for useful sites to get me up to speed. I realized there are thousands of how-tos and “getting started” guides. My starting point was a development platform called PlatformIO (http://platformio.org/). It makes it really easy to write code and “deploy” it to the board. Because I love developing software with C#, I was a little too lazy to start coding in C/C++. After installing PlatformIO and getting started, I found some simple example code written in Lua, and since I had no experience writing Lua code, I decided to start with it. Lua script is pretty simple and easy to learn, but after a while I found that it consumes a lot of the ESP’s resources, and I was not really convinced by the language features, so I switched to JavaScript.

Espruino is a firmware that enables JavaScript development on the ESP. It brings a web interface where you can write your code and run it immediately. You also have the possibility to upload a fully prepared firmware. For my purposes, it was not what I wanted to deal with, so I moved on to the next development experience: the Arduino IDE!

I am familiar with the Arduino IDE and with developing in C/C++. The reasons I gave before also led me to the conclusion that this is an adequate way of writing code for the ESP, although the Arduino IDE is not that comfortable to write code in. The IDE has no out-of-the-box support for the ESP8266, but setting it up is pretty simple: you have to add the ESP package URL to the board manager in the preferences (for example: https://adafruit.github.io/arduino-board-index/package_adafruit_index.json,http://arduino.esp8266.com/stable/package_esp8266com_index.json). After setting it up, I could start coding as I normally would for my Arduino. But after some hours of fiddling and trial and error, I switched to VSCode.

Working with VSCode and Arduino

For the first few days I could live with that solution – coding in VSCode (with additional extensions like clang, C/Cpp, …) and then building/uploading with the Arduino IDE. But as one can imagine, this smells and therefore cries out for a better solution.
I found a nice way of setting up VSCode to support Arduino; you can find the great article here. But I also created a PowerShell script that automates the setup process a little.

It checks whether a local folder “VisualStudioCodeArduino” exists. If it doesn’t, it clones the files from Fabien’s Git repo and copies all necessary files to their destinations. After running the script I can simply call code . in PowerShell, start coding in VSCode and also run build and deploy from the tasks. It runs fine and works like a charm. I put the script, together with base code that can be used as a starting point, into my GitHub repo here. With that setup I can get ready for development quickly.
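The script itself lives in the repo rather than in this post; a minimal sketch of what it does might look like this (the repo URL is a placeholder, see the linked article for the real one):

```powershell
# Sketch: fetch Fabien's VSCode/Arduino files and copy them into the project.
$repoUrl = "<url-of-fabiens-repo>"   # placeholder: see the article linked above
$folder  = "VisualStudioCodeArduino"

if (-not (Test-Path $folder)) {
    git clone $repoUrl $folder       # clone the prepared task/settings files
}

# copy the .vscode folder (tasks.json etc.) next to the sketch sources
Copy-Item -Path (Join-Path $folder ".vscode") -Destination "." -Recurse -Force
```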

Please feel free to modify the script or leave a comment about your experiences.

Accessing a SQL Server instance from the command line

Working with a local SQL Server can be challenging if you don’t have any tools to access the database. For administrative tasks it can be helpful to know that you can access SQL Server simply from the command line, using cmd.exe or PowerShell. This is nothing new, but I think it is not that well known.

So, to start, open cmd.exe and type, for example:
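The command was shown as a screenshot in the original post; for a SQL LocalDB 2012 instance it looks roughly like this (the instance name is an assumption):

```
sqlcmd -S "(localdb)\v11.0" -E
```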

This command opens a trusted (-E) connection to your local instance (-S) of SQL LocalDB 2012. Note: this command is case sensitive.

cmd then prompts for further commands with “1>”. Here you can type T-SQL statements such as:
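The screenshot showed a statement along these lines, which lists the attached databases:

```sql
SELECT name FROM sys.databases
```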

It then waits for a terminator, for example GO + <Enter key>.

After this, your SQL Server instance runs the command and returns the databases that are attached.

If you would like to know more, read here: https://msdn.microsoft.com/en-us/library/ms162773.aspx

Beginner’s guide – Azure IoT Suite

For those interested in doing some really awesome things with things, I recommend having a closer look at the Azure IoT Suite.

It is a website that gets you ready for IoT in minutes. The Azure IoT Suite packages the IoT capabilities of the Azure cloud, and through the web applications it offers you can dive into the world of IoT.

But let’s see how to start…

This guide shows how to create and work with the Azure IoT Suite.

And here are the prerequisites:
– an Azure subscription (use your Microsoft account and register for a 90-day free subscription)
– maybe some devices, if available (not a must)


  1. First hook into https://www.azureiotsuite.com/ … register or sign in
  2. Next, you see the following screen
  3. To proceed, click the tile with the big plus on it
  4. As the next step shows, you now have two options to proceed: you can either select a predictive maintenance solution or remote monitoring. What is the difference? The “predictive maintenance” concept is based on evaluating data with machine learning to predict issues in monitored systems. The “remote monitoring” solution consists of dashboards and monitoring tools that also enable specific device management.
    First, I would recommend starting with “remote monitoring”, because it is easier to get going with. Machine learning is made really simple with Azure ML, but as a topic it is still a complex one.
  5. So click on “Remote monitoring” and enter all necessary details
  6. After you clicked on “Create solution”, Azure IoT Suite starts the deployment process.
    What it really does in the background is simply gather the sources for the web apps and web jobs from GitHub (https://github.com/Azure/azure-iot-remote-monitoring) and start the deployment scripts from there.
    So, if you like, you can go directly to GitHub, grab the sources and run the PowerShell scripts/batch files yourself.
    Here is a hint: if you check the picture in step 5, you can see the provisioned components of the IoT Suite app.
    Look carefully at the SKUs (stock keeping units): IoT Hub is set to S2, one App Service to P1, another to S1, and storage to Standard-GRS.
    These SKUs aren’t cheap. So after creating the “remote monitoring” solution, you should go to the different services and lower the units.
  7. Now Azure is creating your solution
  8. Lastly, you have to accept some authentication and access requests. After clicking through successfully, you can launch the app:

Lowering prices

…and here is how!

1. First, take IoT Hub. Go to http://www.portal.azure.com, locate your resource group (in my case BlogTT2) and click on the IoT Hub (“BlogTT2xxx”)

Don’t forget to save.

2. Next, go to the App Services and switch to a lower SKU, as the following example shows


3. Also check the storage account. This is a big cost factor, so reduce it to an LRS SKU as in the following picture



4. With these tweaks you can reduce the cost from over $100/month to roughly $50
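If you prefer scripting over clicking, the same downgrades can presumably be done with the AzureRM PowerShell module; the cmdlet and SKU parameter names below are assumptions worth verifying against your module version, and the resource names are just examples:

```powershell
$rg = "BlogTT2"   # example resource group from this post

# IoT Hub: scale S2 down to S1 with a single unit
Set-AzureRmIotHub -ResourceGroupName $rg -Name "BlogTT2xxx" -SkuName S1 -Units 1

# App Service plan: P1 (Premium) down to B1 (Basic)
Set-AzureRmAppServicePlan -ResourceGroupName $rg -Name "BlogTT2xxx-plan" -Tier Basic

# Storage: geo-redundant (GRS) down to locally redundant (LRS)
Set-AzureRmStorageAccount -ResourceGroupName $rg -Name "blogtt2storage" -SkuName Standard_LRS
```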


…Hope you got everything right. Play around and get comfortable with IoT 🙂

Calling Linux command-line functions from Node.js

I am currently working on a project that requires device management (DM). Part of DM is controlling devices remotely. My setup is a Linux-based device connected to Azure IoT Hub using its new device management feature. The device is controlled by a Node.js app, and one of its tasks is to handle a reboot signal from the cloud. The problem I had was breaking out of the Node.js environment to convince the device to reboot. After some trial and error I came up with the following simple solution – maybe someone has the same problem, then this is a good start:

1) No npm install is needed: child_process is a built-in Node.js module (and the old sys module is deprecated in favour of util)

2) …and use it as follows:


IoT growth and its pitfalls

The hype around the “Internet of Things” (IoT) has picked up speed. By now it is on almost everyone’s lips. What people understand by it varies widely, but there is consensus on what IoT is supposed to deliver: devices make data from our environment (directly or indirectly) available over a network (usually the internet) in order to provide services that extend our environment or make it usable in extended ways. It should be mentioned, though, that IoT is not a brand-new idea; devices were already “talking” to the internet many years ago. What is new is that the hardware has become very inexpensive (to put it more bluntly: “cheap”) and the technology has reached a higher level of maturity (AI from Microsoft (Cortana) and Google (DeepMind), for example, has become “socially acceptable”).

Figure 1: www.google.de – Trends (IoT, Internet of Things) [24 Oct 2016]

Pressure leads to high risk – the situation today

Growing investment in this market is producing more and more new business models that allow companies to come to market quickly with all kinds of solutions. However, we are still at the beginning of everything related to IoT, which leads to two approaches. Either we play with hardware and software technologies and see whether, together with the business models, they work on the market (the fail-fast strategy), or we use already established techniques to get to market quickly and develop business models on top of them. With the fail-fast strategy, technological approaches often end up immature or not fully thought through. With the second approach, the company usually has better competence regarding the “physical” technologies it uses, but still struggles with new technologies and concepts such as the cloud.

In both cases, however, there is one common problem (certainly many others too, but they are not the topic here): security. The pressure on companies to establish themselves in the new market, to explore new business areas or simply to hold their ground keeps rising, while the time for researching and rigorously applying suitable security strategies falls short.

Old problems – new problems

As the title aptly suggests, the IoT generation is struggling with teething problems. Unfortunately, the saying “old problems are new problems” seems to apply even in this high-tech field. This was recently demonstrated by the incidents at Twitter and others, which fell victim to IoT-based attacks (http://www.n-tv.de/technik/Hacker-legt-Twitter-Spotify-und-Co-lahm-article18910386.html).

When computers still had “freedom of movement” mainly in the spatial sense (that is, little or no connection to the internet of the day), companies only felt the problems of networking once the internet grew more and more popular. Numerous security holes opened up and were continuously exploited by attackers. Attacks that made use of the sheer mass of internet-capable computers played a frequent role (see DDoS attacks [Wikipedia]). Nobody could have anticipated this at the time, simply because the experience was lacking. Internet users, too, were still quite inexperienced and went online with unpatched, unsecured machines (no virus scanner, no firewall, …), which led to wild times. By now, however, industry, companies and users have learned, and the old problems have largely been resolved: internet providers offer systems that can withstand DDoS attacks, (many) PCs ship with firewalls and virus scanners by default, and programmers can use modern development tools and standards (against SQL injection, for input validation, encryption, …) to prevent attacks.

Unfortunately, it seems as if nothing was learned, or as if the lessons of the past are being ignored. Companies do not engage seriously or deeply enough with the technologies they use, or they misjudge the interactions and effects of deploying networked devices. This is not meant as an accusation: companies usually only fix things once something happens that hurts the wallet – they act cost-optimized. The result: security for the product is neglected, either because the technology is proven and mature (loosely translated: technology from back then) or because the modern technologies are not known in all their facets. (See also the article: Bedrohungslandschaft 2016: IoT-Angriffe und neue Umgehungstechniken)

The problems occurring today have been known for a long time. Various heating manufacturers with remote-control software, for example, know the issue, and other smart-home vendors are not and were not immune to the exploitation of security holes either [www.av-test.org – topic Smart Home].


So what is the way out of this dilemma? Well, we have to learn…!
On the one hand, we must deploy modern security techniques. Proven hardware can still be used, for example, but may have to be encapsulated behind secure field gateways or extended with new technology. Secure communication protocols must be used; by now they exist for almost every IoT scenario. A short excursion: AMQP(S) for lean, QoS-oriented approaches, MQTT(S) for pretty much any hardware, HTTP(S) for powerful devices, and perhaps CoAP/UDP (DTLS) for absolute low-end hardware.
Furthermore, companies have to rethink. They must learn from the past, perhaps invest the odd cent in “field research”, train their staff and bring adequate hardware with new security chips to market. Unfortunately, the sheer endless variety of options means that choosing the right technology is possible only to a limited degree or with great effort. The right direction still has to emerge here, usually driven by consortia and the standards that come with them.

One example is set by the German federal government’s strategy called Industrie 4.0 (see Wikipedia), because here companies have a legitimate self-interest in making their IoT systems more secure. With the OPC Task Force (today the OPC Foundation), composed of Siemens, Intellution and Fisher-Rosemount, a universal, vendor-independent communication protocol and system was created that is an industrial standard today. With OPC UA, many of the security holes have now been closed as well.

Unfortunately, there can currently be no ultimate solution that takes effect immediately to mitigate today’s high risk from constantly growing security holes.
In the end, it remains to be said that the “IoT generation”, just like the “internet generation” back then, first has to grow up in order to overcome its current teething problems.

How to change hosts entries on network changes


I change some special hosts file entries depending on the network I am connecting to.
One is Direct Access (DA) into my company’s network, which works with IPv4 address resolution. The other one connects directly to the office network, using IPv6.
So every time I switch between these two networks I have to change the entries by hand to get address resolution right… and that is really annoying.

After a couple of years (right…! doing this day by day, always planning to find a way out the next day) and many hard-to-track issues that could easily have been avoided by not forgetting to change these hosts entries, I tried to find a solution…

…and here it is!

It is so simple that I shouldn’t post it here, to save face 🙂. But I think I am not alone with this.

The Windows Task Scheduler is the key. So, let me explain:

Think of a simple hosts file entry:
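The entry was shown as a screenshot; it would look something like this (host name and addresses are made up; the trailing “home”/“wlan” tags are what the script later matches on):

```
# 2001:db8::10    myserver.company.local    # home (office network, IPv6)
192.0.2.10        myserver.company.local    # wlan (Direct Access, IPv4)
```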


(Currently I am connected by DA over IPv4, so the second line is uncommented and the first one commented out.)

To flip these settings automatically, I can create a task (there are of course other possible solutions) that runs on a detected network profile change. A little PowerShell script then modifies the right entries in the hosts file. So have an eye on the following instructions:

  1. Open the Windows Task Scheduler
  2. Create a new task by right-clicking somewhere in the Task Scheduler tree
  3. In the next dialog, enter a name for the task
    1. (it’s up to you whether to run it with the logged-on user or not)
    2. Changing the hosts file is only possible with administrative privileges, so check “Run with highest privileges”
    3. Also set the configuration according to the machine it runs on
  4. Switch to the “Triggers” tab
    1. At “Begin the task:” select “On an event”
    2. At “Log:” search for Microsoft-Windows-NetworkProfile/Operational
    3. At “Source:” select NetworkProfile
    4. As “Event-ID:” enter 1000 (meaning “Network changed”)
  5. OK… slowly coming to an end…
    1. Switch one tab further to “Actions”
    2. Leave “Start a program” in “Action:”
    3. Now we want a PowerShell script to be triggered on the network-changed event
    4. So… add PowerShell.exe as the “Program/script” name
    5. Under “Add arguments (optional):” add the script’s path
  6. If you are not willing to spend time selecting the right event but want the same result, here is a simpler option
    1. Switch to the “Conditions” tab
    2. Select the last checkbox and choose the network that triggers
  7. Save everything and go to the next step…

The triggered PowerShell script

Now everything is ready to run something on the network-change event. So here comes the script (I don’t have to mention that there are more elegant ways):

(again my example hosts file)

The first line of the script retrieves the network profile name (the name that is also listed in the combobox from step 6.b). Depending on the name of the current network profile, all lines in the hosts file containing the string “wlan” are uncommented and the ones containing “home” are commented out.
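A minimal sketch of such a script, assuming the “wlan”/“home” tags from my example hosts file and a profile name of “wlan” (the original script was shown as a screenshot):

```powershell
# Retrieve the name of the currently active network profile
$profileName = (Get-NetConnectionProfile | Select-Object -First 1).Name
$hostsPath   = "$env:SystemRoot\System32\drivers\etc\hosts"
$lines       = Get-Content $hostsPath

if ($profileName -eq "wlan") {
    $lines = $lines | ForEach-Object {
        if ($_ -match "wlan") {
            $_ -replace '^\s*#\s*', ''     # uncomment wlan entries
        } elseif ($_ -match "home" -and $_ -notmatch '^\s*#') {
            "# $_"                         # comment out home entries
        } else {
            $_
        }
    }
}

Set-Content -Path $hostsPath -Value $lines
```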

The result after the network switch is:

Making ActiveMQ topics virtual for use with queues

The problem

Before I start explaining the core part, I will tell you why I am posting this.

Or jump directly to the solution below.

Everything started with RabbitMQ 😉 in a major enterprise IoT project for a well-known German refrigerator producer that was mentioned by Satya Nadella at the Hannover Messe in Germany in 2016.
The goal was to bring smart devices online with a common messaging protocol. But because of some issues with the broker (handling certain certificates), the decision was made to replace RabbitMQ with ActiveMQ.
The technical problem with that decision was that all sinks of messages sent to the broker via MQTT are handled by ActiveMQ as topics. This makes the whole communication architecture non-scalable.

Please let me explain some details regarding scalability and topics vs. queues…

If you want an IoT architecture that scales, you should enable your environment to handle as many messages as possible in parallel. To achieve this, you have to decouple logic/functions into worker units. You can then raise the number of parallel workers, which consume messages in parallel. As you can see… this is scaling! But it is important to point out that these workers can only consume messages in parallel if the source they consume from delivers messages in a FIFO fashion (one in, one out). This is known as the queueing concept.

Queues are constructs that allow a consumer to handle only one message at a time. With that you have a concurrency-enabled environment where all consumers are “battling” for messages. With this pattern you can keep queues “empty”, and if there is more load on the system, you can scale out by waking up more instances of the same worker.


The other, let’s call it “message handling strategy”, is topics. The idea is to have multiple different workers consume topic-related messages. Think of the following scenario: you have a smart device sending notifications, alarms and other urgent messages, but it also sends some logging/monitoring messages. If you want backend modules that handle these different types of messages in different ways, you need topics. A message is then routed to a topic called “Alerts” or “Loggings” or whatever… With this strategy, one message is delivered to all the consuming workers listening on the same topic.

The problem with changing the broker

Maybe you can now see the main problem with the broker change: RabbitMQ handled all messages sent via the MQTT protocol in queues, but after the switch the concept changed to topics, because ActiveMQ handles MQTT messages with the topic strategy.

[Quote: “ActiveMQ is a JMS broker in its core, so there needs to be some mapping between MQTT subscriptions and JMS semantics. Subscriptions with QoS=0 (At Most Once) are directly mapped to plain JMS non-persistent topics. For reliable messaging, QoS=1 and QoS=2, by default subscriptions are transformed to JMS durable topic subscribers. This behaviour is desired in most scenarios. For some use cases, it is useful to map these subscriptions to virtual topics. Virtual topics provide a better scalability and are generally better solution if you want to use you MQTT subscribers over network of brokers.“, https://activemq.apache.org/mqtt.html ]

And that makes our architecture inefficient and non-scalable. One way out of this situation is to change the smart device to send via AMQP (for example), but that would mean changing the whole project plan and with it the time to market. Or, as a second solution, you do some “magic” configuration on the broker side 😉…

The solution

How to change topics to virtual topics

As a developer in the Microsoft ecosystem (Visual Studio, C#, .NET, …) it is not trivial to get into this broker and understand the different concepts, “languages” and other “strange things” used with ActiveMQ. The documentation of the broker is also not that detailed or easy to understand if you only want to use the product (which is understandable, because hardly anyone uses it as an out-of-the-box tool). But I think other devs like me, who are only looking for a way around the problem I explained above, could find this solution useful.

First: Config

Somewhere in the install folder of ActiveMQ…

There is a “conf” folder containing activemq.xml. In it, you can find your allowed and configured connections in the transportConnectors section.

In the excerpt below, you can see the line where the MQTT protocol is enabled. This line gets extended with ActiveMQ’s parameter for the subscription strategy (see below): https://activemq.apache.org/mqtt.html
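The excerpt itself is missing here; based on the default configuration and the MQTT page linked above, the extended connector line should look roughly like this:

```xml
<transportConnectors>
    <!-- MQTT endpoint: map QoS 1/2 subscriptions to virtual topics
         instead of durable topic subscribers -->
    <transportConnector name="mqtt"
        uri="mqtt://0.0.0.0:1883?transport.subscriptionStrategy=mqtt-virtual-topic-subscriptions"/>
</transportConnectors>
```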

If you haven’t already, you should also fix your code to read from queues:

  • Create consumers for queues instead of topics and pass the new path pattern…
  • With virtual topics the name of the queue changes to the following pattern: Consumer.Application.VirtualTopic.QueueName.
    An example: if you send a message to the topic Alert using the pattern smartDevice.number.Alert, this changes (from the broker’s or consumer’s perspective) to VirtualTopic.smartDevice.number.Alert.
    That is then consumable via the path (e.g.) Consumer.AlertWorker.VirtualTopic.smartDevice.number.Alert (meaning: I have a worker, or multiple instances, handling alert messages from the “Alert” queue of a specific device).
    You can also use wildcard characters like *, e.g. Consumer.AlertWorker.VirtualTopic.smartDevice.*.Alert (meaning: a worker, or multiple instances, handling alert messages from the “Alert” queue of all devices).
    … and so on.

I hope someone can make use of this solution; maybe it even leads to another one. Please share your thoughts!

Making event handling scalable

I am currently working on an enterprise IoT project. As we all know, thousands or millions of messages arrive at the message broker and sit there to be consumed by decoupled workers that compute and manage all the different types of messages.

The special thing I am currently working on is a kind of configuration worker that is generated automatically by a “main worker” that pre-filters messages and so on – I will not discuss that architecture in this article (maybe in a later post). This post is not about IoT infrastructure, where the goal is to enable parallelism and scalability for device-to-cloud communication. It is about the same achievement, but for events within a software component itself.

The pipeline pattern is the key

Consider the following common scenario: you are handling multiple messages arriving as events on a handler in a software unit. The unit reacts with heavy computational algorithms or long-running operations on services, files, etc. A traditional approach would be to hook up a handler that calls some methods processing the content of the arriving message. Because of the nature of event handlers, multiple messages can be handled at the same time. So why change the pattern?

I think common IoT scenarios show why; they are not that different from the events and handlers in your application. The problem is blocked resources! If there are handlers that run for a long time, it is not guaranteed that other handlers can be processed in parallel. Another issue is the scalability of the processing. But how can pipelines help?

Now consider the following solution: an event/message arrives and is stored in a queue. At the other end of the queue, a receiver consumes the message when it has time to work on messages. A queue is by nature a decoupling system; a sender can also keep putting items into the queue as long as the queue has enough space left. With this concept an event handler can receive events and does not have to care about anything but its main objective – handling events.

Give me code!

Here is an example of what I mean (written in C#).

Old way:
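The original screenshot is gone; a hypothetical reconstruction of the “old way” (the handler performs the heavy work inline, so each event blocks the source until processing is done) might look like this:

```csharp
using System;
using System.Threading;

// Old way: the event handler itself performs the long-running work,
// so the event source is blocked while each message is processed.
public class MessageProcessor
{
    public event EventHandler<string> MessageArrived;

    public MessageProcessor()
    {
        // the handler does everything inline
        MessageArrived += (sender, message) => HandleMessage(message);
    }

    public void Raise(string message) => MessageArrived?.Invoke(this, message);

    private void HandleMessage(string message)
    {
        Thread.Sleep(1000); // stands in for a long-running operation
    }
}
```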

Way with Pipeline-Pattern:
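And a sketch of the pipeline variant, using a BlockingCollection as the decoupling queue (the types and names are my own, not from the original screenshot):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Pipeline way: the handler only enqueues; one or more consumers
// drain the queue independently, so the event source is never blocked.
public class PipelineProcessor : IDisposable
{
    private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();
    private readonly Task _consumer;

    public event EventHandler<string> MessageArrived;

    public PipelineProcessor()
    {
        // the handler's only job: put the message into the queue
        MessageArrived += (sender, message) => _queue.Add(message);

        // consumer works through the queue; start more tasks to scale out
        _consumer = Task.Run(() =>
        {
            foreach (var message in _queue.GetConsumingEnumerable())
                HandleMessage(message);
        });
    }

    public void Raise(string message) => MessageArrived?.Invoke(this, message);

    private void HandleMessage(string message)
    {
        Thread.Sleep(1000); // the long-running work happens here
    }

    public void Dispose()
    {
        _queue.CompleteAdding();
        _consumer.Wait();
    }
}
```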




Update for the OneDrive client (Win10)

The OneDrive client recently received an update. Since then, you can finally sync an additional business account locally. The advantage: you get everything in one place and can control it through a single app.

… and this is how it works …

  • Open the OneDrive client’s menu by right-clicking the little cloud in the taskbar, then click “Settings”
    • Go to the “Account” tab and click “Add a business account” at the bottom (as in the picture)

    • Then sign in with your business account (and adjust the local OneDrive folder if needed)
    • Afterwards, you can select the folders to synchronize.

    • When you reach the following screen, everything is set up. (Don’t forget: depending on the amount of data, synchronization can take a while and may “disturb” your system a bit.)