Returning documents as one file back from Azure CosmosDB – Part II

In my previous post, I showed how to get all documents of a collection into a single JSON file. The best reason to do it that way is that you only need a browser, which keeps you independent of the hardware. But that approach has a drawback in some situations: if your documents’ content is too large, the Data Explorer cannot concatenate everything and the output ends up paged. Not what we want!

There is another option for getting all documents from Cosmos DB into one JSON file: the “Data migration tool” for Azure Cosmos DB. It’s pretty easy to install and use; just keep your connection string ready. I will not duplicate the content that already covers in more detail how to use the tool and what to fill in to get the right output, so take this post as a pointer to it. I have used it very often for various tasks, which is why I recommend having a look at it. Visit here
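For a one-shot export to a single JSON file, the tool’s command-line variant can be called roughly like this (switch names from memory, so double-check them against the tool’s documentation; account, key, database and collection are placeholders):

dt.exe /s:DocumentDB /s.ConnectionString:"AccountEndpoint=https://<account>.documents.azure.com:443/;AccountKey=<key>;Database=<database>" /s.Collection:<collection> /t:JsonFile /t.File:export.json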

If you have experience with the Data migration tool, let me know and leave a comment.

Returning documents as one file back from Azure CosmosDB

If you ever tried getting multiple documents from a Cosmos DB collection as one “file”, then you should follow these steps. There can be different requirements for this, but those are not in the scope of this blog post.

Option I – directly in the Azure Portal

(Option II – using the Data migration tool)

Here is what you can do:

    1. First, open “Data Explorer” in your Cosmos DB resource (Azure Portal) and select the database -> collection of your interest.
    2. Click “New SQL Query” -> this opens a new query pane with a select statement and also a “Settings” button that opens the settings slide-in.

      new query pane
    3. In that slide-in you have several options:
      – one is to set a custom value for how many results are displayed per page
      – another is to lift the default limit of 100 results and make it unlimited

      unlimited page results
      custom page limit

       

      ! Don’t forget to save !

    4. Next, go to the query pane and leave the select statement as it is, if you have no other conditions on the outcome. Click “Execute Query”. Now you can grab the output in the results pane with CTRL+A and CTRL+C.

      execute query
    5. Open your favorite text editor (I like VS Code very much), paste in the clipboard and save it as a file.

Here you go! 🙂 Hope this helps. If you have other suggestions, you’re welcome to add comments.

Accessing VSTS REST API – Get Release definition

Out of the need to automate some tasks in VSTS, I was looking for a solution to access VSTS without any UI. To get this done, I first used the VSTS CLI [CLI Link]. But unfortunately the CLI is not “complete” (it is missing, e.g., Release management) and I also cannot install the VSTS CLI everywhere. With that in mind, I decided to use the full REST API [Documentation] to get things done. Here are the steps to take to get at least one release definition via REST calls to VSTS:

Create a PAT (Personal Access Token) in VSTS to gain access…

  1. …by clicking on your profile icon and choosing “Security”
  2. Next, click “Add” to generate a token
  3. Fill in the name of your PAT, the expiration period and so on. Under “Authorized Scopes” you should select “Selected Scopes” and check all relevant scopes that you would like to authorize.

With the PAT at hand, open Postman (or another tool that you like or use).

What we will do here first is get a list of all projects along with their IDs:

  1. Add the calling REST URI in the format shown right after this list
  2. Select the Authorization tab and choose “Basic Auth”
  3. Leave the “User” field empty
  4. Add the newly created PAT into the “Password” field
  5. Make sure you have a GET operation and click “Send” -> the resulting response of the REST call will be displayed below in the Postman window.
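The URI for listing all projects looks roughly like this (the account name is a placeholder; the exact api-version depends on your VSTS version, so check the REST documentation):

GET https://{account}.visualstudio.com/_apis/projects?api-version={version}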

Next, we will get the release definitions of a project, which we can use for further documentation tasks:

      1. Now, for getting the list of release definitions, you have to read the documentation on Visual Studio Docs carefully. There is a lag in the documentation! The REST API for retrieving the definitions is documented as follows:
        GET https://{instance}/{project}/_apis/release/definitions?api-version={version}
        But it should be as in the examples (like so):
        GET https://fabfiber-inc.vsrm.visualstudio.com/MyFirstProject/_apis/Release/definitions
        Notice the red vsrm.
    2. With that, create a new REST call as a GET operation by typing the URL in the format shown below:

(you can also append the api version)
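In generic form, with the corrected vsrm host (account and project are placeholders), the URL is:

GET https://{account}.vsrm.visualstudio.com/{project}/_apis/release/definitions?api-version={version}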

  1. With Authorization set as before, you can send the call and get back a list of release definitions in JSON format
  2. For retrieving a single one with details, you have to pass a specific ID from the list before as a parameter to the REST API (a scripted variant of the call is sketched below)

    ! As a hint: the definition ID is a plain number, so don’t use any GUIDs. 🙂
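If you prefer scripting over Postman, the same call works from PowerShell, for example like this (account, project and PAT are placeholders):

$pat     = "<your PAT>"
$auth    = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
$headers = @{ Authorization = "Basic $auth" }
# list all release definitions of the project
Invoke-RestMethod -Method Get -Headers $headers -Uri "https://<account>.vsrm.visualstudio.com/<project>/_apis/release/definitions"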

That’s all! Now you can analyse the resulting JSON or use it for further documentation. I will soon post an article where I reuse this JSON to generate a readme.md for documentation purposes…

Getting Mono running on Intel Edison’s Yocto Linux

Intro

 

Hi there,

After Intel discontinued the Edison and Co., I was not sure what to do with the neat piece of hardware I own. Despite not getting any support, updates and so forth, it is still hardware that has WiFi, Bluetooth, an SD card slot and more onboard. So it has enough capabilities to build cool things. I decided not to throw it away 😉 and to see what I can do with it from another perspective.

With that in mind, I asked myself whether it isn’t possible to get my favorite dev setup running on the Intel Edison: the .NET Framework with C#.

Half a year ago I already gave it a try, but it was a little bit tricky, because it meant downloading the whole package and compiling it on the Edison (Install Mono by hand [“David’s Random Projects and Documents Web Page”]). In my case, there were problems with storage and compile errors. But then a package update for use with opkg appeared, so I was able to get the Mono environment installed and usable. Read on to see how things work…

Install Mono

Please make sure that you have added these unofficial package feeds to your /etc/opkg/base-feeds.conf:

src/gz all http://repo.opkg.net/edison/repo/all
src/gz edison http://repo.opkg.net/edison/repo/edison
src/gz core2-32 http://repo.opkg.net/edison/repo/core2-32

After that, you can refresh the package lists and, if necessary, upgrade all packages in your bash (careful: upgrading everything can eat up your free space). The commands are shown below.
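The standard opkg commands for that are:

opkg update
opkg upgrade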

 

Having done that, only
opkg install mono
is needed and everything is ready to use (it takes a moment to download and configure).

Testing Mono installation

You should have a look at the Mono version that is now installed.
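That can be checked with the standard Mono switch:

mono --version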

The output should display something similar to this, where the Mono compiler version is 4.2.2, which maps to .NET Framework 4.6 (I think – please correct me if I am wrong).

Writing code

After that, you could write your first test app in C#.

Create a test folder (like in the screenshot above) and open a new file, e.g. tests.cs, in vi.

(type ‘i’ for insert mode | to save: press ESC, then ‘:’ and then ‘x’)

For compiling your first app, type mcs tests.cs – this builds the code into tests.exe.
ls -lh shows tests.exe in your folder. Running your app is easy, just type
mono tests.exe and this is the result:

Adding Hardware access (GPIO,…)

But writing only plain C# apps is not what the Intel Edison is made for, so I needed access to the underlying hardware. At first I tried to access Intel’s mraa libs by P/Invoking them, but thankfully this guy here, Mayuki Sawatari, wrote an assembly that has everything in it (ありがとうございます, すごいです。– thank you, it’s great).

I tried to get everything compiled on my Edison under Yocto Linux, but that was not possible, so I cloned the repository to my Windows machine and opened the solution file with Visual Studio 2017. The build was successful and I could copy the resulting DLL over to my home directory on the Edison, where it is now available for further development.

git clone https://github.com/mayuki/MraaSharp.git
Again, create a file (e.g. pinTest.cs) and copy in this code (it’s a slightly modified version of what you can find inside that Git repository – I adapted it to my Intel Edison Arduino Breakout Board):

 

 

This code is a working blink example, which blinks the onboard LED every second.

 

Getting from CSV to JSON is pretty easy

This is a short reminder for myself, but it could be helpful for someone else out there. Nothing new!

I often look for easy solutions that resolve “easy” problems. This is one: “convert something to something”.

After looking around in the outer spheres of the Internet, I realized that out-of-the-box tools can do the job better, or at least without any dependencies or such things.

So, how do you convert a regular CSV to JSON? Take PowerShell!
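A one-liner along these lines does the job (the file names are placeholders; add -Delimiter ';' to Import-Csv if your CSV uses semicolons):

Import-Csv -Path .\input.csv | ConvertTo-Json | Out-File .\output.json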

It’s easy, isn’t it? 🙂

Deploy application into Azure Service Fabric with VSTS and AAD

This article is about enabling a Service Fabric Cluster (SFC) in Azure for use with AAD (Azure Active Directory) authentication.

My setup is as follows:

I used VSTS (Visual Studio Team Services), where I built a release that takes care of deploying to the SFC. To get everything working, you first need a cluster endpoint configuration that allows VSTS to deploy an application into the SFC.
To get this right, you can choose between two main possibilities: certificate auth or AAD auth.
If you would like to choose certificate auth, then you should read this article: Deploy Azure Service Fabric Application with VSTS (it’s written by Mike Kaufmann, a friend of mine and an MVP for ALM/DevOps).

If you would like to choose AAD auth, then this is what you are looking for…

First you have to grab a PowerShell script that creates the app registrations for you (you could do this by hand, but for consistency the script is the better choice) [Create a Service Fabric cluster by using Azure Resource Manager (Microsoft Docs) – paragraph “Set up Azure Active Directory for client authentication”], or simply click Download Script here – but read that article to learn everything you should know.

This generates two app registrations:
ttservicescluster_Cluster and ttservicescluster_client.

 

By the way, it is important to grab the output of the script, because you will need the GUIDs to set up your cluster access with them 🙂
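Those GUIDs typically end up in the azureActiveDirectory section of the cluster’s ARM template, which looks roughly like this (just a sketch – the surrounding template is omitted and the values are the ones from the script output):

"azureActiveDirectory": {
    "tenantId": "<tenant GUID>",
    "clusterApplication": "<GUID of ttservicescluster_Cluster>",
    "clientApplication": "<GUID of ttservicescluster_client>"
}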

Now you have to assign a user to the corresponding app.

First, go to AAD and look for the “Cluster” app registration.

Then, go to that app (yes, there is also another way to get there… by using the “Enterprise Applications” menu in AAD).

After opening the app, you can add users or groups (in my case, I added a user).

Lastly, you have to set the needed right/role for accessing the SFC (Admin is the right choice 🙂).

Having done that, we can concentrate on setting up the cluster endpoint for application deployment via VSTS.

To do this, you have to open the VSTS Services tab…

… and click the “New Service Endpoint” dropdown to create a cluster endpoint.

Fill it out as in the picture below and click OK.

Now, you are ready, to deploy apps to your cluster.

If this was helpful, or lacks details, please let me know.

The client ‘{0}’ with object id ‘{1}’ does not have authorization to perform action ‘Microsoft.ServiceFabric/register/action’ over scope ‘/subscriptions/{2}’

For an enterprise customer, I had to develop a solution that is built in the cloud (Microsoft’s cloud, Azure). In that project I had the following setup:

For build & release, VSTS (Visual Studio Team Services) is used. For deploying bits to Azure, I built a release that should set up a basic architecture in Azure.
For accessing Azure from VSTS, an IT responsible of that company created a Service Principal (SP) that can access Azure resources and added it as a VSTS service endpoint.

Now, one of those architecture components is Service Fabric. After creating the release definition and the scripts in Azure CLI 2.0, I tried to get things working. But unfortunately, the release stopped with the following error message:

… OK, maybe I have to register the namespace manually (usually not necessary, but who really knows 😉), so I used the following command before creating the Service Fabric cluster:
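az provider register --namespace Microsoft.ServiceFabric

(this is the standard Azure CLI 2.0 call for registering a resource provider namespace)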

and this led to the following error:

The client ‘{0}’ with object id ‘{1}’ does not have authorization to perform action ‘Microsoft.ServiceFabric/register/action’ over scope ‘/subscriptions/{2}’

Hmm… that was not what I hoped to get, but it was to be expected 🙁! Are there any account problems? Using a foreign subscription with limited access could be the cause! So I did some investigation on how the SP was created, set up and assigned to VSTS.

And yeah, this was the right track. It became apparent that the SP had been created in AAD with sufficient rights, but it was not assigned as a subscription user with Contributor rights. After proper configuration, everything worked like a charm.
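For reference, that assignment can be made with Azure CLI 2.0 like this (the SP’s app id and the subscription id are placeholders):

az role assignment create --assignee <appId-of-the-service-principal> --role Contributor --scope /subscriptions/<subscriptionId>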

Hope this is also a solution for you!

I am MVP – thank you Microsoft

Yesterday I was awarded by Microsoft as an MVP (Most Valuable Professional). As a matter of fact, I am really happy, glad and also proud to be one of a group of nearly 4000 MVPs around the whole world who drive technology experience in the community and share knowledge.

I hope I will live up to this award in the future as well. With that, I want to say:

Thanks to all who supported my way so far. Especially Michael Kaufmann (MVP ALM) & Benjamin Abt (MVP ASP.NET) & Jan Schenk (Microsoft).

Getting started with C# on Intel Edison (Yocto Linux)

Intro

Hi there,

After Intel discontinued the Edison and Co., I was not sure what to do with the neat piece of hardware I own. Despite not getting any support, updates and so forth, it is still hardware that has WiFi, Bluetooth, an SD card slot and more onboard. So it has enough capabilities to build cool things. I decided not to throw it away 😉 and to see what I can do with it from another perspective.

With that in mind, I asked myself whether it isn’t possible to get my favorite dev setup running on the Intel Edison: the .NET Framework with C#.

Half a year ago I already gave it a try, but it was a little bit tricky, because it meant downloading the whole package and compiling it on the Edison (Install Mono by hand [“David’s Random Projects and Documents Web Page”]). In my case, there were problems with storage and compile errors. But then a package update for use with opkg appeared, so I was able to get the Mono environment installed and usable. Read on to see how things work…

Install Mono

Please make sure that you have added these unofficial package feeds to your /etc/opkg/base-feeds.conf:

src/gz all http://repo.opkg.net/edison/repo/all
src/gz edison http://repo.opkg.net/edison/repo/edison
src/gz core2-32 http://repo.opkg.net/edison/repo/core2-32

After that, you can refresh the package lists and, if necessary, upgrade all packages in your bash (careful: upgrading everything can eat up your free space!). The commands are shown below.
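The standard opkg commands for that are:

opkg update
opkg upgrade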

Having done that, only
opkg install mono
is needed and everything is ready to use (it takes a moment to download and configure).

 

Testing Mono installation

You should have a look at the Mono version that is now installed.
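That can be checked with the standard Mono switch:

mono --version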

The output should display something similar to this, where the Mono compiler version is 4.2.2, which maps to .NET Framework 4.6 (I think – please correct me if I am wrong).

Writing code

After that, you could write your first test app in C#.

Create a test folder (like in the screenshot above) and open a new file, e.g. tests.cs, in vi.

(type ‘i’ for insert mode | to save: press ESC, then ‘:’ and then ‘x’)
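A minimal tests.cs could look like this (just a sketch of mine – any console output will do):

using System;

class Tests
{
    static void Main()
    {
        // print something so we can see that Mono executed our code
        Console.WriteLine("Hello from Mono on the Intel Edison!");
    }
}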


For compiling your first app, type:
mcs tests.cs
This builds the code into tests.exe.

ls -lh shows tests.exe in your folder.

Running your app is easy; just type:
mono tests.exe
and this is the result:

Adding Hardware access (GPIO,…)

But writing only plain C# apps is not what the Intel Edison is made for, so I needed access to the underlying hardware. At first I tried to access Intel’s mraa libs by P/Invoke, but thankfully this guy here, Mayuki Sawatari, wrote an assembly that has everything in it (ありがとうございます, すごいです。– thank you, it’s great).

I tried to get everything compiled on my Edison under Yocto Linux, but that was not possible, so I cloned the repository to my Windows machine and opened the solution file with Visual Studio 2017. The build was successful and I could copy the resulting DLL over to my home directory on the Edison, where it is now available for further development.

Again, create a file (e.g. pinTest.cs) and copy in this code (it’s a slightly modified version of what you can find inside that Git repository – I adapted it to my Intel Edison Arduino Breakout Board):

This code is a working blink example, which blinks the onboard LED every second.

Finally

Although the Intel Edison is discontinued, it is still working, and with fresh development tools it can be sweet to hack some useful things with it.

The upside of this approach is that you can also code on your Windows machine and test your code by mocking mraa before releasing it to the device. Welcome to easy DevOps 😉

So, if you own one and would like to give this a try, leave a comment or share your projects.

 

cheers

How to bootstrap an ESP8266 with Azure Services

One of the things I played around with on the ESP8266 and Azure IoT Hub was how to make the whole infrastructure deployable and get the code working for other devs, without sharing my Azure environment and credentials.

The main problem was to keep all modules decoupled from each other, so that the IoT device (here my ESP8266) can reach my Azure endpoints at all times, whether endpoints have changed through redeployments or new devices have been added.

So I started developing the following architecture:

Bootstrap architecture

As you can see, the device first tries, by connecting over WiFi, to reach the Azure backend, which is a function. That function’s responsibility is to create a device identity: if it does not exist, it will be created, and then the function sends back the device’s identity together with a new endpoint. That endpoint points to a storage account containing the up-to-date firmware as a blob.
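To make that idea a bit more concrete, here is a minimal sketch of what such a bootstrap function could look like, assuming an HTTP-triggered C# Azure Function and the Microsoft.Azure.Devices service SDK; the query parameter, the app setting name and the firmware URL are placeholders of mine, not necessarily what the repository uses:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Devices;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class BootstrapDevice
{
    [FunctionName("BootstrapDevice")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req)
    {
        // the device identifies itself with an id (placeholder query parameter)
        string deviceId = req.Query["deviceId"];
        if (string.IsNullOrEmpty(deviceId))
            return new BadRequestObjectResult("deviceId missing");

        // connect to IoT Hub via the service connection string (placeholder app setting name)
        var registry = RegistryManager.CreateFromConnectionString(
            Environment.GetEnvironmentVariable("IoTHubConnectionString"));

        // create the device identity only if it does not exist yet
        var device = await registry.GetDeviceAsync(deviceId)
                     ?? await registry.AddDeviceAsync(new Device(deviceId));

        // hand back the identity plus the endpoint of the current firmware blob (placeholder URL)
        return new OkObjectResult(new
        {
            deviceId = device.Id,
            deviceKey = device.Authentication.SymmetricKey.PrimaryKey,
            firmwareUrl = "https://<storageaccount>.blob.core.windows.net/firmware/latest.bin"
        });
    }
}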

So, on receiving the identity and the storage endpoint, the device can connect to the storage, download the firmware and start flashing. After the flash process is done, the device tries to connect to Azure IoT Hub. If the connection has been established successfully, it starts sending telemetry data (here temperature and a fake battery level) to IoT Hub.

When a new firmware is ready to be flashed onto productive devices, an administrator can send an update command, with which all connected devices set themselves to firmware update mode and start the downloading/flashing process. That’s all!

With this approach my devices are decoupled from the backend. The only thing I need is a little piece of code that enables my device to find the first endpoint. But with that, I can delete my Azure resource group and redeploy it as long as I have fun doing it. And fortunately, I can use this to also share my code and deployment scripts without sharing any secrets 🙂

This is what DevOps is for: making life easier and safer. If you like to, take part in my project and contribute. This version of the code and deployment is a draft. There are a lot of things to do to get this smooth and fluent. So everyone is welcome to adjust and optimize the code and get things right. https://github.com/totosan/DevOpsIoT