Microsoft Build Conference – 2018 – Part 3 (Microsoft Graph, Azure Databricks, Azure Blockchain, Git patterns/anti-patterns)

June 11, 2018

This blog covers Microsoft Graph and Azure-related topics, as well as Git patterns and anti-patterns.

Microsoft Graph

graph

Graph_uses

graph-details

Azure blockchain

Mark Russinovich's talks, if you haven't been to Build or other conferences, are always packed. This time was no different. He is a persuasive speaker who can really simplify complex topics. This year Mark created a fake cryptocurrency using Azure Blockchain for his demo. It did help clear up some of my concepts.

•      Introduced last year
•      Azure Blockchain, or Distributed Ledger Technology (DLT), uses smart contracts to simplify transactions
•      Samples on GitHub:
•      https://github.com/Azure/azure-blockchain-projects

 

azure_blockchain_details

azureblockchain-today

 

downarror

azureblockchaointomorrow

 

Azure Databricks – Apache Spark-based analytics platform optimized for Azure

 

This is a great platform if you need help with wrangling big data. It is a first-party service on Azure, unlike on other platforms.

azuredatabricks2

azuredatabroicks1

Git patterns and anti-patterns for successful developers

I enjoyed this talk as it focused on day-to-day usage.

Trunk-based development – The key idea is to code close to master and make small, simple changes.

Make new branches and get changes merged into master quickly rather than making monolithic changes. Visual Studio can guide you through the trunk-based development workflow; a rough sketch of the commands involved is shown below.
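
As a rough illustration of this workflow (plain git commands; the branch name is made up):

git checkout master
git pull                                  # start from the latest master
git checkout -b fix-typo-in-readme        # short-lived topic branch (hypothetical name)
git commit -am "Fix typo in README"       # one small, focused change
git push origin fix-typo-in-readme        # push and open a pull request for review
git checkout master                       # after the PR is merged, refresh local master
git pull
git branch -d fix-typo-in-readme          # delete the short-lived branch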

git1

git2

git3

git4

Merge back to master

git5

GitHub flow – This is trunk-based development but with an additional level of complexity.

githubflow

git6

GitHub flow is great for continuous deployment but doesn't scale very well.

VSTS Release flow – Always brings the change into master before going to production, unlike GitHub flow.

More here: http://aka.ms/releaseflow

vsts_releaseflow.JPG

References

•      Git patterns and anti-patterns for successful developers

 

 

 

 

 


Microsoft Build Conference – 2018 – Part 2 (IoT)

June 11, 2018

 

Iot

Continuing from Build, where we looked at AI earlier, the next big area was IoT. Microsoft is planning to spend $5 billion on IoT over the next few years. The big announcements at Build were making IoT Edge open source and a big focus on the Intelligent Edge and IoT security.

A lot of content was devoted to intelligent IoT Edge devices. Some of the hardware that was showcased is shown below.

AI enabled_edge devices

 

Roobo speech device in detail – complete AI system solutions for IoT devices: household electric appliances, automobiles, robots, toys, and other industries. These kits are far-field with custom keyword spotting, in linear and circular microphone configurations.

roobo_device

spechdevice_solution

 

 

Azure Sphere   azureSpher1

This project leverages Microsoft's expertise in security and cloud to create a secure, connected MCU, especially important in this day and age when IoT security is a key issue.

Key points about Azure Sphere

  • Build 2018 – Azure Sphere – silicon, software, and cloud
  • Azure Sphere is the first device of its kind, with security at its heart
  • Like IoT Core, Azure Sphere has 10-year support from Microsoft
  • Azure Sphere is open to any MCU manufacturer who wants to use the Pluton security subsystem royalty-free and innovate with the GPL'd open-source Linux, and it works with any cloud provider
  • Dev kits available in summer 2018

The picture below describes the properties of a highly secured connected device, which is the goal of this project.

azureSpher2

 

Hardware  is essential for establishing the root of trust. This allows hardware to protect device  identity  and software integrity.

azure_sphere_root_oftrust

Software helps to configure dynamic compartments which limit the reach of any single failure.

azure_sphere_barriers

It is the nature of software and security that there will be bugs and exploits over time. With renewable security, the cloud is leveraged to provide updates, the software applies those updates, and the hardware prevents rollback. This makes the solution robust.

azure_sphere_updates

azure_spehere_mcu_powered_devices

Implementation details

Each hardware unit is segregated from the others by a firewall, preventing cross-contamination in case one of the components gets compromised. The solution uses Cortex-M cores (typically used in MCUs) and a Cortex-A core, which is used for application and higher-compute workloads. This, however, can impact battery life in environments where battery power is the only source, and this is why some folks call it an application processor rather than an MCU.

azuure_sphere_architecture1

Lots of silicon vendors are implementing this solution, especially MediaTek. There is interest from appliance makers like Wolf/Sub-Zero and Leoni.

azure_sphere_arch2

azure_sphere_security_service

It can also be used as a front-end gateway to another MCU-based IoT device.

Next time I will cover Microsoft 365/Graph, Git patterns and anti-patterns, Azure Databricks, and Azure Blockchain.

References

 

 

Microsoft Build Conference – 2018 – Part 1 (AI)

June 7, 2018

Why did the JavaScript developer need glasses? Because he didn't C#. Yes, it is Build time, folks :)…

The Build conference was held at the Washington State Convention Center, same as last year. It featured approximately 405+ sessions (keynotes, deep dives, 20-minute introduction sessions, etc.), all packed into 3 days.

This year one of the things I was amazed by at Build was the healthy assortment of food and snacks: very few high-sugar/carb items and more veggies, fruits, etc.

The ambience in general was also healthier, with performers offering a good mix of rock, new age, meditative, and contemporary dance; very unique.

Bummer, I missed out on the celebration, especially the silent disco.

Conference overview 

The theme of Build was around AI (intelligent apps), Azure IoT Edge, Windows 10 on ARM, desktop app modernization, migrating apps to the cloud, Microsoft Graph, and Teams.

Novelty items were Azure Sphere, Project Brainwave, open AI framework (ONNX) support, ML.NET, and Azure Databricks.

In the keynotes, Joe Belfiore mentioned that Microsoft wants developers to use data and build intelligent apps, and not to think of themselves as .NET or cloud developers but to consider themselves Microsoft 365 developers.

It was impressive to see that Satya Nadella was the only CEO who addressed the three key pillars of responsibility for a platform company. Google and Facebook didn't address this at their own developer conferences!

For me, one of the words of wisdom from the vision keynote which struck a chord was from Satya Nadella: "The time has come to ask ourselves not what computers can do but what computers should do."

AI for everyone … 

In this blog I cover primarily AI announcements and topics.

It was very clear that intelligent apps are here to stay and will be democratized with Microsoft's platform push for AI. This was evident as there were around 57 sessions dedicated to AI, covering a variety of topics:

  • New framework ML.NET and the open AI framework ONNX
  • ML.NET is open source
  • Azure Conversational AI (a.k.a. bots) – 100+ new features
  • Smart Ink, Speech
  • Video AI – Video Indexer
  • Azure ML vision
  • AI for Security & AI for Accessibility
  • Ethics

 

One of the key use cases of AI is moving from customization to personalization.

Microsoft showcased their AI success across public AI benchmarks.

New framework ML.NET, open AI framework – ONNX

Microsoft is working with Facebook and other companies to create an open AI framework (ONNX) which allows models from multiple AI frameworks like CNTK, TensorFlow, etc. to run on multiple kinds of hardware (GPUs, FPGAs, etc.). This decouples models from hardware by providing a middle layer.
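
As a small, hedged illustration of that middle layer: CNTK (since 2.3.1) can save and load models in the ONNX interchange format, so another ONNX-capable runtime or hardware backend could consume the same model. The tiny model below is made up just to keep the sketch self-contained:

import cntk as C

# Build a tiny model in memory so the example stands alone.
x = C.input_variable(2)
z = C.layers.Dense(1, activation=C.sigmoid)(x)

# Save in the ONNX interchange format (decoupled from any one framework/hardware).
z.save("tiny_model.onnx", format=C.ModelFormat.ONNX)

# Load it back; any ONNX-capable runtime could do the equivalent.
z2 = C.Function.load("tiny_model.onnx", format=C.ModelFormat.ONNX)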

 

ML.NET (open source)

This is great for all the .NET developers who don't need to invest in learning another programming language. And ML.NET is more than a set of language bindings; it is a framework.

https://github.com/dotnet/machinelearning

Keras Framework –

Windows ML on Intel Platform

Intel is adding support for machine learning to its CPUs, GPUs, and other hardware architectures like FPGAs and VPUs.

Conversational AI, a.k.a. bots

There have been more than 100 features added to the Bot Framework.

 

Drawing Bots

At the core of Microsoft’s drawing bot is a technology known as a Generative Adversarial Network, or GAN. The network consists of two machine learning models, one that generates images from text descriptions and another, known as a discriminator, that uses text descriptions to judge the authenticity of generated images. The generator attempts to get fake pictures past the discriminator; the discriminator never wants to be fooled. Working together, the discriminator pushes the generator toward perfection.
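
To make the generator/discriminator interplay concrete, here is a deliberately tiny, runnable GAN sketch in plain NumPy. It generates scalars instead of images (the "real" data is just samples from N(4, 1.5)), so it only illustrates the adversarial training loop, not text-to-image generation:

import numpy as np

# Toy 1-D GAN: generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
rng = np.random.default_rng(0)
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.01, 64

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(5000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    real = rng.normal(4.0, 1.5, batch)
    fake = a * rng.normal(0.0, 1.0, batch) + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)   # grad of -[log D(real) + log(1 - D(fake))]
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: push D(fake) toward 1, i.e. try to fool the discriminator.
    z = rng.normal(0.0, 1.0, batch)
    d_fake = sigmoid(w * (a * z + b) + c)
    a -= lr * np.mean(-(1 - d_fake) * w * z)                   # grad of -log D(G(z))
    b -= lr * np.mean(-(1 - d_fake) * w)

# The mean of the generated samples should have drifted toward the real mean (~4).
print(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))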

Lessons learned from chatbot testing

Speech  – Unified speech SDK 

Speech services  and Cortana

Cortana Skills – Looks like the list is growing …

Video Indexer (inference 🙂 ) – Lots of great AI features

Intelligent Ink – Allows a cloud API service to decipher advanced ink. Windows already has built-in ink recognition and is already good at recognizing diagrams and free-form handwriting. Intelligent Ink uses cloud APIs for ink analysis.

It performs sophisticated ink analysis to find more semantic structure in writing.

Finally, the following slide explains why you would use a cloud service for ink rather than the default local Ink API.

Microsoft Office, especially PowerPoint, is leveraging this:

Ink to Shape

Transforms the rough sketch below into a nicer circle.

Ink to text – It can also identify handwritten text within a shape and convert it to typed text.

Mural.co is a company doing a lot of work in this space.

More information on the session: https://channel9.msdn.com/events/Build/2018/BRK2430?term=ink

Azure ML computer vision – Toshiba Camera

Toshiba showcased their computer vision device, which detects problems using the Azure ML computer vision package.

Next time I will cover IoT: Azure IoT Edge, Azure Sphere, etc.

References

 

     All Build sessions

     https://mybuild.microsoft.com/sessions

Windows 10 on ARM is looking good

May 26, 2018

whywin10

References:

https://channel9.msdn.com/events/Build/2018/BRK2438?term=windows%20on%20ARM%20

  • WoW64

https://msdn.microsoft.com/en-us/library/windows/desktop/aa384274(v=vs.85).aspx

Transfer Learning sample from Microsoft AI framework – CNTK

March 27, 2018

Transfer learning is really a great way to save on resources by transferring learning from an existing model. More info here:

https://www.cntk.ai/pythondocs/CNTK_301_Image_Recognition_with_Deep_Transfer_Learning.html

Transfer Learning is a useful technique when, for instance, you know you need to classify incoming images into different categories, but you do not have enough data to train a Deep Neural Network (DNN) from scratch. Training DNNs takes a lot of data, all of it labeled, and often you will not have that kind of data on hand. If your problem is similar to one for which a network has already been trained, though, you can use Transfer Learning to modify that network to your problem with a fraction of the labeled images (we are talking tens instead of thousands).

What is Transfer Learning?

With Transfer Learning, we use an existing trained model and adapt it to our own problem. We are essentially building upon the features and concepts that were learned during the training of the base model. With a Convolutional DNN (ResNet_18 in this case), we are using the features learned from ImageNet data and cutting off the final classification layer, replacing it with a new dense layer that will predict the class labels of our new domain.

The input to the old and the new prediction layer is the same, we simply reuse the trained features. Then we train this modified network, either only the new weights of the new prediction layer or all weights of the entire network.

This can be used, for instance, when we have a small set of images that are in a similar domain to an existing trained model. Training a Deep Neural Network from scratch requires tens of thousands of images, but training one that has already learned features in the domain you are adapting it to requires far fewer.

In our case, this means adapting a network trained on ImageNet images (dogs, cats, birds, etc.) to flowers, or sheep/wolves. However, Transfer Learning has also been successfully used to adapt existing neural models for translation, speech synthesis, and many other domains – it is a convenient way to bootstrap your learning process.
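
To make the mechanics concrete, here is a minimal sketch loosely following the CNTK TransferLearning sample. The model path, node names ("features", "z.x"), input shape, and class count are assumptions, so check the sample itself for the exact values:

import cntk as C
from cntk.logging.graph import find_by_name

# Load the pre-trained base model (assumed to be an ImageNet-trained ResNet_18).
base_model = C.load_model("ResNet_18.model")
feature_node = find_by_name(base_model, "features")   # input node
last_hidden = find_by_name(base_model, "z.x")         # node just before the old classifier

# Clone everything up to (but not including) the old classification layer,
# freezing the ImageNet-trained weights.
cloned_layers = C.combine([last_hidden.owner]).clone(
    C.CloneMethod.freeze, {feature_node: C.placeholder(name="features")})

# Attach a new dense layer that predicts the class labels of the new domain.
num_new_classes = 5                                    # e.g. flower categories (assumption)
features = C.input_variable((3, 224, 224), name="features")
z = C.layers.Dense(num_new_classes, activation=None, name="prediction")(cloned_layers(features))
# z can now be trained on the small, new dataset (optionally unfreezing all weights).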

Here is an example:

 

 

CNTK contains a transfer learning sample. I have a machine with an NVIDIA GeForce GTX 960M GPU.

Here is what happens when I start this sample with Python:

Activate CNTK

C:\CNTK-Samples-2-4\Examples\Image\TransferLearning>conda activate cntk-py36

(cntk-py36) C:\CNTK-Samples-2-4\Examples\Image\TransferLearning>

Error 

(cntk-py36) C:\CNTK-Samples-2-4\Examples\Image\TransferLearning>python TransferLearning.py
Traceback (most recent call last):
File "TransferLearning.py", line 9, in <module>
import cntk as C
ModuleNotFoundError: No module named 'cntk'

 

This gets resolved by using the Python interpreter from Anaconda.

 

(cntk-py36) C:\Users\Dell\Downloads\ethereum-mining-windows>where python
C:\Users\Dell\Anaconda3\envs\cntk-py36\python.exe
C:\Users\Dell\Anaconda3\python.exe
C:\Users\Dell\AppData\Local\Programs\Python\Python36-32\python.exe

(cntk-py36) C:\CNTK-Samples-2-4\Examples\Image\TransferLearning>C:\Users\Dell\Anaconda3\python.exe TransferLearning.py

 


C:\CNTK-Samples-2-4\Examples\Image\TransferLearning>python TransferLearning.py
Selected GPU[0] GeForce GTX 960M as the process wide default device.
-------------------------------------------------------------------
Build info:

Built time: Jan 31 2018 14:57:35
 Last modified date: Wed Jan 31 01:10:27 2018
 Build type: Release
 Build target: GPU
 With 1bit-SGD: no
 With ASGD: yes
 Math lib: mkl
 CUDA version: 9.0.0
 CUDNN version: 7.0.5
 Build Branch: HEAD
 Build SHA1: a70455c7abe76596853f8e6a77a4d6de1e3ba76e
 MPI distribution: Microsoft MPI
 MPI version: 7.0.12437.6
-------------------------------------------------------------------
Training transfer learning model for 20 epochs (epoch_size = 6149).
Training 15949478 parameters in 68 parameter tensors.
CUDA failure 2: out of memory ; GPU=0 ; hostname=DESKTOP-IA3HLGI ; expr=cudaMalloc((void**) &deviceBufferPtr, sizeof(AllocatedElemType) * AsMultipleOf(numElements, 2))
Traceback (most recent call last):
 File "TransferLearning.py", line 217, in <module>
 max_epochs, freeze=freeze_weights)
 File "TransferLearning.py", line 130, in train_model
 trainer.train_minibatch(data) # update model with it
 File "C:\Users\Dell\Anaconda3\lib\site-packages\cntk\train\trainer.py", line 181, in train_minibatch
 arguments, device)
 File "C:\Users\Dell\Anaconda3\lib\site-packages\cntk\cntk_py.py", line 2975, in train_minibatch_overload_for_minibatchdata
 return _cntk_py.Trainer_train_minibatch_overload_for_minibatchdata(self, *args)
RuntimeError: CUDA failure 2: out of memory ; GPU=0 ; hostname=DESKTOP-IA3HLGI ; expr=cudaMalloc((void**) &deviceBufferPtr, sizeof(AllocatedElemType) * AsMultipleOf(numElements, 2))

[CALL STACK]
 > Microsoft::MSR::CNTK::CudaTimer:: Stop
 - Microsoft::MSR::CNTK::CudaTimer:: Stop (x2)
 - Microsoft::MSR::CNTK::GPUMatrix<float>:: Resize
 - Microsoft::MSR::CNTK::Matrix<float>:: Resize
 - std::enable_shared_from_this<Microsoft::MSR::CNTK::MatrixBase>::enable_shared_from_this<Microsoft::MSR::CNTK::MatrixBase>
 - std::enable_shared_from_this<Microsoft::MSR::CNTK::MatrixBase>:: shared_from_this (x3)
 - CNTK::Internal:: UseSparseGradientAggregationInDataParallelSGD
 - CNTK:: CreateTrainer
 - CNTK::Trainer:: TotalNumberOfUnitsSeen
 - CNTK::Trainer:: TrainMinibatch (x2)
 - PyInit__cntk_py (x2)

I ran into a problem with memory. My graphics card has 2 GB of dedicated memory, as you can see below:

Apparently that isn't enough. The next question I have is: does CUDA use dedicated GPU memory or shared system memory? My graphics card has 3.9 GB of the latter:

So my total graphics memory is 5.9 GB.

It turns out that I am running out of dedicated GPU memory, not shared system memory.

I couldn't find any GPU setting, but I tried reducing the mini-batch size and that did the trick. In the TransferLearning.py source code, on line 41, changing the variable from 'mb_size = 50' to 'mb_size = 30' did the trick; see the snippet below. Your mileage may vary based on your GPU, so do experiment.

Learning and working with CNTK(2.4)

February 22, 2018

I just got introduced to CNTK, and this is my attempt to learn CNTK from the ground up.

I used Python with CNTK on my laptop (a Core i5 with no discrete graphics card). You don't need an Azure subscription to use Microsoft CNTK!! But if you do have a discrete graphics card (dGPU), it will help speed things up.

So I used the link here to install and set up CNTK:

https://docs.microsoft.com/en-us/cognitive-toolkit/setup-windows-python?tabs=cntkpy24

 From the very beginning

Install python 3.6

https://www.python.org/ftp/python/3.6.4/python-3.6.4.exe

C:\WINDOWS\system32>python --version
Python 3.6.4

 

 

You will also need to install Anaconda.

Anaconda3

We have been testing CNTK with Anaconda3 4.1.1 (64-bit) and Python versions 2.7 and 3.5, as well as Anaconda3 4.3.1 with Python version 3.6. If you do not have an Anaconda3 Python installation, install Anaconda3 4.1.1 Python for Windows (64-bit).

Below we assume Anaconda is installed and that it is listed before any other Python installations in your PATH. If you plan on using a GPU enabled version of CNTK, you will need a CUDA 9 compliant graphics card and up-to-date graphics drivers installed on your system.

The above link has all the next steps.

Anaconda for python 3.6

https://repo.continuum.io/archive/Anaconda3-5.1.0-Windows-x86_64.exe

 

Install with Python 3.6, CPU only

C:\>pip install https://cntk.ai/PythonWheel/CPU-Only/cntk-2.4-cp36-cp36m-win_amd64.whl

 Collecting cntk==2.4 from https://cntk.ai/PythonWheel/CPU-Only/cntk-2.4-cp36-cp36m-win_amd64.whl
 Downloading https://cntk.ai/PythonWheel/CPU-Only/cntk-2.4-cp36-cp36m-win_amd64.whl (71.4MB)
 100% |████████████████████████████████| 71.5MB 17kB/s
 Collecting scipy>=0.17 (from cntk==2.4)
 Downloading scipy-1.0.0-cp36-none-win_amd64.whl (30.8MB)
 100% |████████████████████████████████| 30.8MB 35kB/s
 Collecting numpy>=1.11 (from cntk==2.4)
 Downloading numpy-1.14.0-cp36-none-win_amd64.whl (13.4MB)
 100% |████████████████████████████████| 13.4MB 78kB/s
 Installing collected packages: numpy, scipy, cntk
 Successfully installed cntk-2.4 numpy-1.14.0 scipy-1.0.0

GPU version 

C:\WINDOWS\system32>pip install https://cntk.ai/PythonWheel/GPU-1bit-SGD/cntk-2.4-cp36-cp36m-win_amd64.whl

cntk-2.0-cp36-cp36m-win_amd64.whl is not a supported wheel on this platform.

 

The installation failed; the reason was that I needed to install the Anaconda version matching Python 3.6 before the CNTK install. After I did that, it worked.

C:\>pip install https://cntk.ai/PythonWheel/GPU/cntk-2.4-cp36-cp36m-win_amd64.whl

Collecting cntk==2.4 from https://cntk.ai/PythonWheel/GPU/cntk-2.4-cp36-cp36m-win_amd64.whl
 Downloading https://cntk.ai/PythonWheel/GPU/cntk-2.4-cp36-cp36m-win_amd64.whl (436.3MB)
 100% |████████████████████████████████| 436.3MB 44kB/s
Requirement already satisfied: scipy>=0.17 in c:\users\dell\anaconda3\lib\site-packages (from cntk==2.4)
Requirement already satisfied: numpy>=1.11 in c:\users\dell\anaconda3\lib\site-packages (from cntk==2.4)
Installing collected packages: cntk
Successfully installed cntk-2.4
Quick installation test

A quick test that the installation succeeded can be done by querying the CNTK version

C:\Python27>python -c "import cntk; print(cntk.__version__)"

2.4

 

 

C:\Users\Dell>conda create --name cntk-py36 python=3.6 numpy scipy h5py jupyter

Fetching package metadata .......
 Solving package specifications: ..........

Package plan for installation in environment C:\Users\Dell\Anaconda3\envs\cntk-py36:

The following packages will be downloaded:

package | build
 ---------------------------|-----------------
 mkl-2017.0.3 | 0 126.3 MB
 vs2015_runtime-14.0.25420 | 0 2.0 MB
 vc-14 | 0 703 B
 icu-57.1 | vc14_0 34.2 MB
 jpeg-9b | vc14_0 304 KB
 openssl-1.0.2l | vc14_0 5.1 MB
 python-3.6.2 | 0 31.5 MB
 zlib-1.2.11 | vc14_0 119 KB
 certifi-2016.2.28 | py36_0 214 KB
 colorama-0.3.9 | py36_0 22 KB
 decorator-4.1.2 | py36_0 15 KB
 entrypoints-0.2.3 | py36_0 10 KB
 ipython_genutils-0.2.0 | py36_0 38 KB
 jedi-0.10.2 | py36_2 246 KB
 jsonschema-2.6.0 | py36_0 103 KB
 libpng-1.6.30 | vc14_1 503 KB
 markupsafe-1.0 | py36_0 28 KB
 mistune-0.7.4 | py36_0 148 KB
 numpy-1.13.1 | py36_0 3.6 MB
 pandocfilters-1.4.2 | py36_0 13 KB
 path.py-10.3.1 | py36_0 51 KB
 pygments-2.2.0 | py36_0 1.4 MB
 pyzmq-16.0.2 | py36_0 539 KB
 simplegeneric-0.8.1 | py36_1 8 KB
 sip-4.18 | py36_0 270 KB
 six-1.10.0 | py36_0 20 KB
 testpath-0.3.1 | py36_0 15 KB
 tornado-4.5.2 | py36_0 631 KB
 wcwidth-0.1.7 | py36_0 24 KB
 wheel-0.29.0 | py36_0 129 KB
 wincertstore-0.2 | py36_0 14 KB
 h5py-2.7.0 | np113py36_0 720 KB
 html5lib-0.9999999 | py36_0 178 KB
 pickleshare-0.7.4 | py36_0 11 KB
 prompt_toolkit-1.0.15 | py36_0 340 KB
 python-dateutil-2.6.1 | py36_0 238 KB
 qt-5.6.2 | vc14_6 55.5 MB
 scipy-0.19.1 | np113py36_0 13.1 MB
 setuptools-36.4.0 | py36_1 534 KB
 traitlets-4.3.2 | py36_0 130 KB
 bleach-1.5.0 | py36_0 22 KB
 ipython-6.1.0 | py36_0 1.0 MB
 jinja2-2.9.6 | py36_0 392 KB
 jupyter_core-4.3.0 | py36_0 112 KB
 pip-9.0.1 | py36_1 1.7 MB
 pyqt-5.6.0 | py36_2 4.5 MB
 jupyter_client-5.1.0 | py36_0 139 KB
 nbformat-4.4.0 | py36_0 137 KB
 ipykernel-4.6.1 | py36_0 137 KB
 nbconvert-5.2.1 | py36_0 411 KB
 jupyter_console-5.2.0 | py36_0 52 KB
 notebook-5.0.0 | py36_0 5.4 MB
 qtconsole-4.3.1 | py36_0 197 KB
 widgetsnbextension-3.0.2 | py36_0 2.0 MB
 ipywidgets-6.0.0 | py36_0 65 KB
 jupyter-1.0.0 | py36_3 4 KB
 ------------------------------------------------------------
 Total: 294.4 MB

The following NEW packages will be INSTALLED:

bleach: 1.5.0-py36_0
 certifi: 2016.2.28-py36_0
 colorama: 0.3.9-py36_0
 decorator: 4.1.2-py36_0
 entrypoints: 0.2.3-py36_0
 h5py: 2.7.0-np113py36_0
 hdf5: 1.8.15.1-vc14_4
 html5lib: 0.9999999-py36_0
 icu: 57.1-vc14_0
 ipykernel: 4.6.1-py36_0
 ipython: 6.1.0-py36_0
 ipython_genutils: 0.2.0-py36_0
 ipywidgets: 6.0.0-py36_0
 jedi: 0.10.2-py36_2
 jinja2: 2.9.6-py36_0
 jpeg: 9b-vc14_0
 jsonschema: 2.6.0-py36_0
 jupyter: 1.0.0-py36_3
 jupyter_client: 5.1.0-py36_0
 jupyter_console: 5.2.0-py36_0
 jupyter_core: 4.3.0-py36_0
 libpng: 1.6.30-vc14_1
 markupsafe: 1.0-py36_0
 mistune: 0.7.4-py36_0
 mkl: 2017.0.3-0
 nbconvert: 5.2.1-py36_0
 nbformat: 4.4.0-py36_0
 notebook: 5.0.0-py36_0
 numpy: 1.13.1-py36_0
 openssl: 1.0.2l-vc14_0
 pandocfilters: 1.4.2-py36_0
 path.py: 10.3.1-py36_0
 pickleshare: 0.7.4-py36_0
 pip: 9.0.1-py36_1
 prompt_toolkit: 1.0.15-py36_0
 pygments: 2.2.0-py36_0
 pyqt: 5.6.0-py36_2
 python: 3.6.2-0
 python-dateutil: 2.6.1-py36_0
 pyzmq: 16.0.2-py36_0
 qt: 5.6.2-vc14_6
 qtconsole: 4.3.1-py36_0
 scipy: 0.19.1-np113py36_0
 setuptools: 36.4.0-py36_1
 simplegeneric: 0.8.1-py36_1
 sip: 4.18-py36_0
 six: 1.10.0-py36_0
 testpath: 0.3.1-py36_0
 tornado: 4.5.2-py36_0
 traitlets: 4.3.2-py36_0
 vc: 14-0
 vs2015_runtime: 14.0.25420-0
 wcwidth: 0.1.7-py36_0
 wheel: 0.29.0-py36_0
 widgetsnbextension: 3.0.2-py36_0
 wincertstore: 0.2-py36_0
 zlib: 1.2.11-vc14_0

Proceed ([y]/n)? y

Fetching packages ...
 mkl-2017.0.3-0 100% |###############################| Time: 0:00:38 3.44 MB/s
 vs2015_runtime 100% |###############################| Time: 0:00:00 3.71 MB/s
 vc-14-0.tar.bz 100% |###############################| Time: 0:00:00 128.68 kB/s
 icu-57.1-vc14_ 100% |###############################| Time: 0:00:12 2.92 MB/s
 jpeg-9b-vc14_0 100% |###############################| Time: 0:00:00 3.68 MB/s
 openssl-1.0.2l 100% |###############################| Time: 0:00:01 3.70 MB/s
 python-3.6.2-0 100% |###############################| Time: 0:00:08 3.68 MB/s
 zlib-1.2.11-vc 100% |###############################| Time: 0:00:00 2.04 MB/s
 certifi-2016.2 100% |###############################| Time: 0:00:00 1.84 MB/s
 colorama-0.3.9 100% |###############################| Time: 0:00:00 813.05 kB/s
 decorator-4.1. 100% |###############################| Time: 0:00:00 830.97 kB/s
 entrypoints-0. 100% |###############################| Time: 0:00:00 1.22 MB/s
 ipython_genuti 100% |###############################| Time: 0:00:00 2.30 MB/s
 jedi-0.10.2-py 100% |###############################| Time: 0:00:00 3.71 MB/s
 jsonschema-2.6 100% |###############################| Time: 0:00:00 3.56 MB/s
 libpng-1.6.30- 100% |###############################| Time: 0:00:00 3.69 MB/s
 markupsafe-1.0 100% |###############################| Time: 0:00:00 1.88 MB/s
 mistune-0.7.4- 100% |###############################| Time: 0:00:00 3.53 MB/s
 numpy-1.13.1-p 100% |###############################| Time: 0:00:01 3.69 MB/s
 pandocfilters- 100% |###############################| Time: 0:00:00 1.84 MB/s
 path.py-10.3.1 100% |###############################| Time: 0:00:00 2.92 MB/s
 pygments-2.2.0 100% |###############################| Time: 0:00:00 3.69 MB/s
 pyzmq-16.0.2-p 100% |###############################| Time: 0:00:00 3.71 MB/s
 simplegeneric- 100% |###############################| Time: 0:00:00 1.12 MB/s
 sip-4.18-py36_ 100% |###############################| Time: 0:00:00 1.02 MB/s
 six-1.10.0-py3 100% |###############################| Time: 0:00:00 1.54 MB/s
 testpath-0.3.1 100% |###############################| Time: 0:00:00 1.84 MB/s
 tornado-4.5.2- 100% |###############################| Time: 0:00:00 3.75 MB/s
 wcwidth-0.1.7- 100% |###############################| Time: 0:00:00 1.47 MB/s
 wheel-0.29.0-p 100% |###############################| Time: 0:00:00 3.01 MB/s
 wincertstore-0 100% |###############################| Time: 0:00:00 1.25 MB/s
 h5py-2.7.0-np1 100% |###############################| Time: 0:00:00 3.74 MB/s
 html5lib-0.999 100% |###############################| Time: 0:00:00 4.14 MB/s
 pickleshare-0. 100% |###############################| Time: 0:00:00 1.64 MB/s
 prompt_toolkit 100% |###############################| Time: 0:00:00 3.76 MB/s
 python-dateuti 100% |###############################| Time: 0:00:00 3.87 MB/s
 qt-5.6.2-vc14_ 100% |###############################| Time: 0:00:16 3.53 MB/s
 scipy-0.19.1-n 100% |###############################| Time: 0:00:03 3.69 MB/s
 setuptools-36. 100% |###############################| Time: 0:00:00 3.81 MB/s
 traitlets-4.3. 100% |###############################| Time: 0:00:00 3.91 MB/s
 bleach-1.5.0-p 100% |###############################| Time: 0:00:00 2.24 MB/s
 ipython-6.1.0- 100% |###############################| Time: 0:00:00 3.71 MB/s
 jinja2-2.9.6-p 100% |###############################| Time: 0:00:00 3.84 MB/s
 jupyter_core-4 100% |###############################| Time: 0:00:00 3.32 MB/s
 pip-9.0.1-py36 100% |###############################| Time: 0:00:00 3.68 MB/s
 pyqt-5.6.0-py3 100% |###############################| Time: 0:00:01 2.62 MB/s
 jupyter_client 100% |###############################| Time: 0:00:00 3.44 MB/s
 nbformat-4.4.0 100% |###############################| Time: 0:00:00 3.18 MB/s
 ipykernel-4.6. 100% |###############################| Time: 0:00:00 3.05 MB/s
 nbconvert-5.2. 100% |###############################| Time: 0:00:00 2.42 MB/s
 jupyter_consol 100% |###############################| Time: 0:00:00 2.60 MB/s
 notebook-5.0.0 100% |###############################| Time: 0:00:01 3.69 MB/s
 qtconsole-4.3. 100% |###############################| Time: 0:00:00 3.41 MB/s
 widgetsnbexten 100% |###############################| Time: 0:00:00 3.69 MB/s
 ipywidgets-6.0 100% |###############################| Time: 0:00:00 3.25 MB/s
 jupyter-1.0.0- 100% |###############################| Time: 0:00:00 564.69 kB/s
 Extracting packages ...
 [ COMPLETE ]|##################################################| 100%
 Linking packages ...
 1 file(s) copied.############################## | 64%
 Active code page: 437
 [ COMPLETE ]|##################################################| 100%
 #
 # To activate this environment, use:
 # > activate cntk-py36
 #
 # To deactivate this environment, use:
 # > deactivate
 #

 

Installing Samples and Tutorials

We provide various samples and tutorials with CNTK. After you install CNTK, you can install the samples/tutorials and Jupyter notebooks. If you installed CNTK into a Python environment, make sure you activate the environment before running this command:

Install samples 

I highly recommend doing this step rather than downloading them separately from GitHub. The reason is that when it installs the samples, it also installs the prerequisites.

 

(cntk-py36) C:\Users\Dell\Downloads\CNTK-master\CNTK-master\Examples\Image\TransferLearning>C:\Users\Dell\AppData\Local\Programs\Python\Python36\python -m cntk.sample_installer

 
C:\Users\Dell\AppData\Local\Programs\Python\Python36\lib\runpy.py:125: RuntimeWarning: 'cntk.sample_installer' found in sys.modules after import of package 'cntk', but prior to execution of 'cntk.sample_installer'; this may result in unpredictable behaviour
 warn(RuntimeWarning(msg))
 INFO: retrieving https://cntk.ai/Samples/CNTK-Samples-2-4.zip

INFO: unzipping to directory CNTK-Samples-2-4

INFO: installing requirements

Collecting h5py>=2.6.0 (from -r CNTK-Samples-2-4\requirements.txt (line 1))
 Downloading h5py-2.7.1-cp36-cp36m-win_amd64.whl (2.3MB)
 100% |████████████████████████████████| 2.3MB 468kB/s
 Collecting jupyter>=1.0.0 (from -r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading jupyter-1.0.0-py2.py3-none-any.whl
 Collecting matplotlib>=1.5.3 (from -r CNTK-Samples-2-4\requirements.txt (line 3))
 Downloading matplotlib-2.1.2-cp36-cp36m-win_amd64.whl (8.7MB)
 100% |████████████████████████████████| 8.7MB 125kB/s
 Collecting pandas>=0.19.1 (from -r CNTK-Samples-2-4\requirements.txt (line 4))
 Downloading pandas-0.22.0-cp36-cp36m-win_amd64.whl (9.1MB)
 100% |████████████████████████████████| 9.1MB 128kB/s
 Collecting pandas-datareader>=0.2.1 (from -r CNTK-Samples-2-4\requirements.txt (line 5))
 Downloading pandas_datareader-0.6.0-py2.py3-none-any.whl (103kB)
 100% |████████████████████████████████| 112kB 3.3MB/s
 Collecting pillow>=3.4.2 (from -r CNTK-Samples-2-4\requirements.txt (line 6))
 Using cached Pillow-5.0.0-cp36-cp36m-win_amd64.whl
 Requirement already satisfied: pip>=8.1.2 in c:\users\dell\appdata\local\programs\python\python36\lib\site-packages (from -r CNTK-Samples-2-4\requirements.txt (line 7))
 Collecting seaborn>=0.7.1 (from -r CNTK-Samples-2-4\requirements.txt (line 8))
 Downloading seaborn-0.8.1.tar.gz (178kB)
 100% |████████████████████████████████| 184kB 3.3MB/s
 Collecting six>=1.10.0 (from -r CNTK-Samples-2-4\requirements.txt (line 9))
 Downloading six-1.11.0-py2.py3-none-any.whl
 Collecting gym>=0.5.2 (from -r CNTK-Samples-2-4\requirements.txt (line 10))
 Downloading gym-0.9.7.tar.gz (108kB)
 100% |████████████████████████████████| 112kB 3.3MB/s
 Requirement already satisfied: numpy>=1.7 in c:\users\dell\appdata\local\programs\python\python36\lib\site-packages (from h5py>=2.6.0->-r CNTK-Samples-2-4\requirements.txt (line 1))
 Collecting ipywidgets (from jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading ipywidgets-7.1.2-py2.py3-none-any.whl (68kB)
 100% |████████████████████████████████| 71kB 2.3MB/s
 Collecting ipykernel (from jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading ipykernel-4.8.2-py3-none-any.whl (108kB)
 100% |████████████████████████████████| 112kB 6.6MB/s
 Collecting jupyter-console (from jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading jupyter_console-5.2.0-py2.py3-none-any.whl
 Collecting qtconsole (from jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading qtconsole-4.3.1-py2.py3-none-any.whl (108kB)
 100% |████████████████████████████████| 112kB 2.2MB/s
 Collecting notebook (from jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading notebook-5.4.0-py2.py3-none-any.whl (8.0MB)
 100% |████████████████████████████████| 8.0MB 142kB/s
 Collecting nbconvert (from jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading nbconvert-5.3.1-py2.py3-none-any.whl (387kB)
 100% |████████████████████████████████| 389kB 1.6MB/s
 Collecting cycler>=0.10 (from matplotlib>=1.5.3->-r CNTK-Samples-2-4\requirements.txt (line 3))
 Downloading cycler-0.10.0-py2.py3-none-any.whl
 Collecting pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 (from matplotlib>=1.5.3->-r CNTK-Samples-2-4\requirements.txt (line 3))
 Downloading pyparsing-2.2.0-py2.py3-none-any.whl (56kB)
 100% |████████████████████████████████| 61kB 3.9MB/s
 Collecting python-dateutil>=2.1 (from matplotlib>=1.5.3->-r CNTK-Samples-2-4\requirements.txt (line 3))
 Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
 100% |████████████████████████████████| 194kB 3.3MB/s
 Collecting pytz (from matplotlib>=1.5.3->-r CNTK-Samples-2-4\requirements.txt (line 3))
 Using cached pytz-2018.3-py2.py3-none-any.whl
 Collecting wrapt (from pandas-datareader>=0.2.1->-r CNTK-Samples-2-4\requirements.txt (line 5))
 Downloading wrapt-1.10.11.tar.gz
 Collecting lxml (from pandas-datareader>=0.2.1->-r CNTK-Samples-2-4\requirements.txt (line 5))
 Downloading lxml-4.1.1-cp36-cp36m-win_amd64.whl (3.5MB)
 100% |████████████████████████████████| 3.6MB 186kB/s
 Collecting requests>=2.3.0 (from pandas-datareader>=0.2.1->-r CNTK-Samples-2-4\requirements.txt (line 5))
 Downloading requests-2.18.4-py2.py3-none-any.whl (88kB)
 100% |████████████████████████████████| 92kB 5.9MB/s
 Collecting requests-file (from pandas-datareader>=0.2.1->-r CNTK-Samples-2-4\requirements.txt (line 5))
 Downloading requests_file-1.4.3-py2.py3-none-any.whl
 Collecting requests-ftp (from pandas-datareader>=0.2.1->-r CNTK-Samples-2-4\requirements.txt (line 5))
 Downloading requests-ftp-0.3.1.tar.gz
 Collecting pyglet>=1.2.0 (from gym>=0.5.2->-r CNTK-Samples-2-4\requirements.txt (line 10))
 Downloading pyglet-1.3.1-py2.py3-none-any.whl (1.0MB)
 100% |████████████████████████████████| 1.0MB 936kB/s
 Collecting widgetsnbextension~=3.1.0 (from ipywidgets->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading widgetsnbextension-3.1.4-py2.py3-none-any.whl (2.2MB)
 100% |████████████████████████████████| 2.2MB 471kB/s
 Collecting ipython>=4.0.0; python_version >= "3.3" (from ipywidgets->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading ipython-6.2.1-py3-none-any.whl (745kB)
 100% |████████████████████████████████| 747kB 334kB/s
 Collecting nbformat>=4.2.0 (from ipywidgets->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading nbformat-4.4.0-py2.py3-none-any.whl (155kB)
 100% |████████████████████████████████| 163kB 2.4MB/s
 Collecting traitlets>=4.3.1 (from ipywidgets->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading traitlets-4.3.2-py2.py3-none-any.whl (74kB)
 100% |████████████████████████████████| 81kB 2.3MB/s
 Collecting jupyter-client (from ipykernel->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading jupyter_client-5.2.2-py2.py3-none-any.whl (88kB)
 100% |████████████████████████████████| 92kB 1.8MB/s
 Collecting tornado>=4.0 (from ipykernel->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading tornado-4.5.3-cp36-cp36m-win_amd64.whl (423kB)
 100% |████████████████████████████████| 430kB 1.6MB/s
 Collecting prompt-toolkit<2.0.0,>=1.0.0 (from jupyter-console->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading prompt_toolkit-1.0.15-py3-none-any.whl (247kB)
 100% |████████████████████████████████| 256kB 2.0MB/s
 Collecting pygments (from jupyter-console->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading Pygments-2.2.0-py2.py3-none-any.whl (841kB)
 100% |████████████████████████████████| 849kB 867kB/s
 Collecting ipython-genutils (from qtconsole->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading ipython_genutils-0.2.0-py2.py3-none-any.whl
 Collecting jupyter-core (from qtconsole->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading jupyter_core-4.4.0-py2.py3-none-any.whl (126kB)
 100% |████████████████████████████████| 133kB 1.3MB/s
 Collecting Send2Trash (from notebook->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading Send2Trash-1.5.0-py3-none-any.whl
 Collecting jinja2 (from notebook->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading Jinja2-2.10-py2.py3-none-any.whl (126kB)
 100% |████████████████████████████████| 133kB 2.2MB/s
 Collecting terminado>=0.8.1 (from notebook->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading terminado-0.8.1-py2.py3-none-any.whl
 Collecting bleach (from nbconvert->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading bleach-2.1.2-py2.py3-none-any.whl
 Collecting pandocfilters>=1.4.1 (from nbconvert->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading pandocfilters-1.4.2.tar.gz
 Collecting testpath (from nbconvert->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading testpath-0.3.1-py2.py3-none-any.whl (161kB)
 100% |████████████████████████████████| 163kB 3.3MB/s
 Collecting mistune>=0.7.4 (from nbconvert->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading mistune-0.8.3-py2.py3-none-any.whl
 Collecting entrypoints>=0.2.2 (from nbconvert->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading entrypoints-0.2.3-py2.py3-none-any.whl
 Collecting certifi>=2017.4.17 (from requests>=2.3.0->pandas-datareader>=0.2.1->-r CNTK-Samples-2-4\requirements.txt (line 5))
 Downloading certifi-2018.1.18-py2.py3-none-any.whl (151kB)
 100% |████████████████████████████████| 153kB 2.3MB/s
 Collecting chardet<3.1.0,>=3.0.2 (from requests>=2.3.0->pandas-datareader>=0.2.1->-r CNTK-Samples-2-4\requirements.txt (line 5))
 Downloading chardet-3.0.4-py2.py3-none-any.whl (133kB)
 100% |████████████████████████████████| 143kB 1.8MB/s
 Collecting idna<2.7,>=2.5 (from requests>=2.3.0->pandas-datareader>=0.2.1->-r CNTK-Samples-2-4\requirements.txt (line 5))
 Downloading idna-2.6-py2.py3-none-any.whl (56kB)
 100% |████████████████████████████████| 61kB 1.1MB/s
 Collecting urllib3<1.23,>=1.21.1 (from requests>=2.3.0->pandas-datareader>=0.2.1->-r CNTK-Samples-2-4\requirements.txt (line 5))
 Downloading urllib3-1.22-py2.py3-none-any.whl (132kB)
 100% |████████████████████████████████| 133kB 2.7MB/s
 Collecting future (from pyglet>=1.2.0->gym>=0.5.2->-r CNTK-Samples-2-4\requirements.txt (line 10))
 Downloading future-0.16.0.tar.gz (824kB)
 100% |████████████████████████████████| 829kB 1.2MB/s
 Requirement already satisfied: setuptools>=18.5 in c:\users\dell\appdata\local\programs\python\python36\lib\site-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Collecting decorator (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading decorator-4.2.1-py2.py3-none-any.whl
 Collecting pickleshare (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading pickleshare-0.7.4-py2.py3-none-any.whl
 Collecting jedi>=0.10 (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading jedi-0.11.1-py2.py3-none-any.whl (250kB)
 100% |████████████████████████████████| 256kB 1.4MB/s
 Collecting colorama; sys_platform == "win32" (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading colorama-0.3.9-py2.py3-none-any.whl
 Collecting simplegeneric>0.8 (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading simplegeneric-0.8.1.zip
 Collecting jsonschema!=2.5.0,>=2.4 (from nbformat>=4.2.0->ipywidgets->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading jsonschema-2.6.0-py2.py3-none-any.whl
 Collecting pyzmq>=13 (from jupyter-client->ipykernel->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading pyzmq-17.0.0-cp36-cp36m-win_amd64.whl (944kB)
 100% |████████████████████████████████| 952kB 981kB/s
 Collecting wcwidth (from prompt-toolkit<2.0.0,>=1.0.0->jupyter-console->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading wcwidth-0.1.7-py2.py3-none-any.whl
 Collecting MarkupSafe>=0.23 (from jinja2->notebook->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading MarkupSafe-1.0.tar.gz
 Collecting pywinpty>=0.5; os_name == "nt" (from terminado>=0.8.1->notebook->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading pywinpty-0.5.1-cp36-cp36m-win_amd64.whl (176kB)
 100% |████████████████████████████████| 184kB 1.6MB/s
 Collecting html5lib!=1.0b1,!=1.0b2,!=1.0b3,!=1.0b4,!=1.0b5,!=1.0b6,!=1.0b7,!=1.0b8,>=0.99999999pre (from bleach->nbconvert->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading html5lib-1.0.1-py2.py3-none-any.whl (117kB)
 100% |████████████████████████████████| 122kB 2.4MB/s
 Collecting parso==0.1.1 (from jedi>=0.10->ipython>=4.0.0; python_version >= "3.3"->ipywidgets->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading parso-0.1.1-py2.py3-none-any.whl (91kB)
 100% |████████████████████████████████| 92kB 2.9MB/s
 Collecting webencodings (from html5lib!=1.0b1,!=1.0b2,!=1.0b3,!=1.0b4,!=1.0b5,!=1.0b6,!=1.0b7,!=1.0b8,>=0.99999999pre->bleach->nbconvert->jupyter>=1.0.0->-r CNTK-Samples-2-4\requirements.txt (line 2))
 Downloading webencodings-0.5.1-py2.py3-none-any.whl
 Installing collected packages: six, h5py, ipython-genutils, decorator, pickleshare, parso, jedi, traitlets, colorama, wcwidth, prompt-toolkit, pygments, simplegeneric, ipython, python-dateutil, pyzmq, jupyter-core, tornado, jupyter-client, ipykernel, jsonschema, nbformat, Send2Trash, MarkupSafe, jinja2, pywinpty, terminado, webencodings, html5lib, bleach, pandocfilters, testpath, mistune, entrypoints, nbconvert, notebook, widgetsnbextension, ipywidgets, jupyter-console, qtconsole, jupyter, cycler, pyparsing, pytz, matplotlib, pandas, wrapt, lxml, certifi, chardet, idna, urllib3, requests, requests-file, requests-ftp, pandas-datareader, pillow, seaborn, future, pyglet, gym
 Running setup.py install for simplegeneric ... done
 Running setup.py install for MarkupSafe ... done
 Running setup.py install for pandocfilters ... done
 Running setup.py install for wrapt ... done
 Running setup.py install for requests-ftp ... done
 Running setup.py install for seaborn ... done
 Running setup.py install for future ... done
 Running setup.py install for gym ... done
 Successfully installed MarkupSafe-1.0 Send2Trash-1.5.0 bleach-2.1.2 certifi-2018.1.18 chardet-3.0.4 colorama-0.3.9 cycler-0.10.0 decorator-4.2.1 entrypoints-0.2.3 future-0.16.0 gym-0.9.7 h5py-2.7.1 html5lib-1.0.1 idna-2.6 ipykernel-4.8.2 ipython-6.2.1 ipython-genutils-0.2.0 ipywidgets-7.1.2 jedi-0.11.1 jinja2-2.10 jsonschema-2.6.0 jupyter-1.0.0 jupyter-client-5.2.2 jupyter-console-5.2.0 jupyter-core-4.4.0 lxml-4.1.1 matplotlib-2.1.2 mistune-0.8.3 nbconvert-5.3.1 nbformat-4.4.0 notebook-5.4.0 pandas-0.22.0 pandas-datareader-0.6.0 pandocfilters-1.4.2 parso-0.1.1 pickleshare-0.7.4 pillow-5.0.0 prompt-toolkit-1.0.15 pyglet-1.3.1 pygments-2.2.0 pyparsing-2.2.0 python-dateutil-2.6.1 pytz-2018.3 pywinpty-0.5.1 pyzmq-17.0.0 qtconsole-4.3.1 requests-2.18.4 requests-file-1.4.3 requests-ftp-0.3.1 seaborn-0.8.1 simplegeneric-0.8.1 six-1.11.0 terminado-0.8.1 testpath-0.3.1 tornado-4.5.3 traitlets-4.3.2 urllib3-1.22 wcwidth-0.1.7 webencodings-0.5.1 widgetsnbextension-3.1.4 wrapt-1.10.11


Build a sample 

I got this error when I downloaded the samples from GitHub; the environment wasn't set up correctly.

 

(cntk-py36) C:\Users\Dell\Downloads\CNTK-master\CNTK-master\Examples\Image\TransferLearning>py TransferLearning.py

 Traceback (most recent call last):
 File "TransferLearning.py", line 11, in <module>
 from pillow import Image
 ModuleNotFoundError: No module named 'pillow'

I tried installing image, pillow, and PIL, but that didn't help!

pip install pillow

 

(cntk-py36) C:\Users\Dell\Downloads\CNTK-master\CNTK-master\Examples\Image\TransferLearning>pip install image

Collecting image
Downloading image-1.5.19-py2.py3-none-any.whl
Requirement already satisfied: pillow in c:\users\dell\anaconda3\envs\cntk-py36\lib\site-packages (from image)
Collecting django (from image)
Downloading Django-2.0.2-py3-none-any.whl (7.1MB)
100% |████████████████████████████████| 7.1MB 148kB/s
Collecting pytz (from django->image)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
100% |████████████████████████████████| 512kB 973kB/s
Installing collected packages: pytz, django, image
Successfully installed django-2.0.2 image-1.5.19 pytz-2018.3

 

(cntk-py36) C:\Users\Dell\Downloads\CNTK-master\CNTK-master\Examples\Image\TransferLearning>conda install pillow

Fetching package metadata .......
 Solving package specifications: ..........

Package plan for installation in environment C:\Users\Dell\Anaconda3\envs\cntk-py36:

The following packages will be downloaded:

package | build
 ---------------------------|-----------------
 libtiff-4.0.6 | vc14_3 466 KB
 olefile-0.44 | py36_0 52 KB
 freetype-2.5.5 | vc14_2 627 KB
 pillow-4.2.1 | py36_0 971 KB
 ------------------------------------------------------------
 Total: 2.1 MB

The following NEW packages will be INSTALLED:

bzip2: 1.0.6-vc14_3
 freetype: 2.5.5-vc14_2
 libtiff: 4.0.6-vc14_3
 olefile: 0.44-py36_0
 pillow: 4.2.1-py36_0

Proceed ([y]/n)? y

Fetching packages ...
 libtiff-4.0.6- 100% |###############################| Time: 0:00:00 4.13 MB/s
 olefile-0.44-p 100% |###############################| Time: 0:00:00 2.47 MB/s
 freetype-2.5.5 100% |###############################| Time: 0:00:00 3.80 MB/s
 pillow-4.2.1-p 100% |###############################| Time: 0:00:00 3.78 MB/s
 Extracting packages ...
 [ COMPLETE ]|##################################################| 100%
 Linking packages ...
 [ COMPLETE ]|##################################################| 100%

 

(cntk-py36) C:\Users\Dell\Downloads\CNTK-master\CNTK-master\Examples\Image\TransferLearning>conda info pil

 Fetching package metadata .......

pil 1.1.7 py26_0
 ----------------
 file name : pil-1.1.7-py26_0.tar.bz2
 name : pil
 version : 1.1.7
 build number: 0
 build string: py26_0
 channel : defaults
 size : 746 KB
 date : 2014-01-01
 fn : pil-1.1.7-py26_0.tar.bz2
 license_family: Other
 md5 : ebf2863cd37405f13d6096d1981e188d
 priority : 1
 schannel : defaults
 url : https://repo.continuum.io/pkgs/free/win-64/pil-1.1.7-py26_0.tar.bz2
 dependencies:
 python 2.6*

pil 1.1.7 py27_0
 ----------------
 file name : pil-1.1.7-py27_0.tar.bz2
 name : pil
 version : 1.1.7
 build number: 0
 build string: py27_0
 channel : defaults
 size : 746 KB
 date : 2014-01-01
 fn : pil-1.1.7-py27_0.tar.bz2
 license_family: Other
 md5 : 05e217d1ecfa9636a92af6ea8cfbd409
 priority : 1
 schannel : defaults
 url : https://repo.continuum.io/pkgs/free/win-64/pil-1.1.7-py27_0.tar.bz2
 dependencies:
 python 2.7*

 

It still didn't fix it until I installed the CNTK samples.

(cntk-py36) C:\Users\Dell\Downloads\CNTK-master\CNTK-master\Examples\Image\TransferLearning>C:\Users\Dell\AppData\Local\Programs\Python\Python36\python TransferLearning.py

 Traceback (most recent call last):
 File "TransferLearning.py", line 200, in <module>
 try_set_default_device(gpu(0))
 File "C:\Users\Dell\AppData\Local\Programs\Python\Python36\lib\site-packages\cntk\internal\swig_helper.py", line 69, in wrapper
 result = f(*args, **kwds)
 File "C:\Users\Dell\AppData\Local\Programs\Python\Python36\lib\site-packages\cntk\device.py", line 96, in gpu
 return cntk_py.DeviceDescriptor.gpu_device(device_id)
 ValueError: Specified GPU device id (0) is invalid.

[CALL STACK]
 > CNTK::NDMask:: MaskedCount
 - CNTK::DeviceDescriptor:: GPUDevice
 - PyInit__cntk_py
 - PyCFunction_FastCallDict
 - PyObject_CallFunctionObjArgs
 - PyEval_EvalFrameDefault
 - Py_CheckFunctionResult
 - PyList_Size
 - PyEval_EvalFrameDefault
 - Py_CheckFunctionResult
 - PyObject_CallFunctionObjArgs
 - PyEval_EvalFrameDefault
 - Py_CheckFunctionResult
 - PyEval_EvalCodeEx
 - PyEval_EvalCode
 - PyArena_Free

 

This one needs a GPU, hence the error.

I commented out the following lines in \CNTK-master\CNTK-master\Examples\Image\TransferLearning\TransferLearning.py to make it work on the CPU. And then I realized that this will smoke my CPU :) ; it was using the GPU for a reason. Just search for 'gpu' in the Python code and comment it out.

#from cntk.device import try_set_default_device, gpu

# try_set_default_device(gpu(0))

Let's try another one:

(cntk-py36) C:\Users\Dell\Downloads\CNTK-master\CNTK-master\Examples\Image\Classification\ConvNet\Python>C:\Users\Dell\AppData\Local\Programs\Python\Python36\python ConvNet_CIFAR10.py

 Selected CPU as the process wide default device.
 -------------------------------------------------------------------
 Build info:

Built time: Jan 31 2018 14:48:31
 Last modified date: Tue Jan 23 11:59:52 2018
 Build type: Release
 Build target: CPU-only
 With 1bit-SGD: no
 With ASGD: yes
 Math lib: mkl
 Build Branch: HEAD
 Build SHA1: a70455c7abe76596853f8e6a77a4d6de1e3ba76e
 MPI distribution: Microsoft MPI
 MPI version: 7.0.12437.6
 -------------------------------------------------------------------
 Training 1195594 parameters in 14 parameter tensors.

Learning rate per 1 samples: 0.0015625
 Momentum per 64 samples: 0.9

…(off it goes)

YAY IT WORKS!!!

I am going to cover more CNTK samples in the next blog. Happy learning

AI next conference Jan. 2018 Seattle WA

February 6, 2018

The NextCon conference series began early last year. The conference was organized by the Association of Technology and Innovation (ATI), with Bill Liu as the lead. This followed the previous one held in Seattle in March 2017. The conference had loads of companies that sponsored and gave tech talks. Roughly 400 people attended.

Conference Schedule:

sch1

 

sch2

 

Here is a quick summary of the recently concluded AI conference NextCon in Seattle, January 2018. This summary has my key takeaways and learnings, keynote summaries, summaries of some of the breakout sessions I attended, and my side discussions, as well as links and resources for the tech talks plus links to blogs and videos. It also includes some of my follow-up blog reading to understand concepts. Hope this is as useful for you as I found it to be!!

 

The conference had 4 tracks:

  1. Computer Vision
  2. Speech /NLP
  3. Data Science/Analytics
  4. Machine Learning

With limited time, I picked mainly the Data Science and Machine Learning tracks to understand trends in how to handle and make sense of large amounts of data.

 

  1. Key takeaways & learnings for us:

These are some of the things I have distilled and filtered from the conference as areas of interest.

  1. AI is a great tool to have in your toolbox. It isn't the be-all and end-all of tools, at least for now, but this could change in the future. For example, an AI which can disambiguate and find flaws in electronic welds can't tell you whether a kid is holding a toothbrush or a baseball bat.
  2. Reinforcement learning (RL) is making a comeback and yielding great results, though at a somewhat higher cost in latency, time, and infrastructure. Martin Görner from Google showed how he trained a Pong-playing AI with just historic data and by making it play itself, generating a lot of data and getting better at it. Think of how a kid learns to bike or learns to walk.

ai_nvidia

Reinforcement learning (RL) and neural networks: neural networks are algorithms, RL is a problem type. You can approach RL with neural networks. What makes RL very different from the others is that you typically don't have a lot of data to start with, but you can generate a lot of data by playing. You also have to deal with the problem that you have to make decisions, but it is not clear which ones are good (delayed reward). For example, it might take several moves in Go before you know whether a move was smart.

3. OS for AI – The next frontier will be when people use each other's algorithms and models to come up with a sophisticated aggregating service. For example, John Peck from Algorithmia showed how: someone writes a fruit classifier, another person a vegetable classifier, and then a third party could aggregate them into a fruit-or-vegetable classifier.

composability

elastic scale

Algorithmia may be a good resource for paying for ML algorithms. I suggested to them after the talk that they support offering data as well, for a fee. This is in line with the Data Science as a Service idea.

4. AutoML, or off-the-shelf machine learning methods – Machine learning is evolving at a pretty strong pace. More and more, it is possible to just feed the AI platform a dataset, and it tunes the hyperparameters and comes up with a trained model.

5. In the grand scheme of things, AI is currently pretty early in its evolution.

 

aiishere

6. Future of AI from Prof. Oren Etzioni – when will superintelligence arrive? AI experts try to answer the question. It is still far out!!

7. Another interesting talk was by Twitter on online ML and why they didn't use deep learning. Deep learning currently has some disadvantages, especially in real-time, low-latency scenarios. More details on this below.

8. Deep learning is providing a lot of value; however, it comes at a cost, as it requires a large data set. It also requires a solid hardware infrastructure. Unfortunately, in deep learning, people usually see very sublinear speedups from many GPUs, so top performance requires top-of-the-line GPUs.

9. The Microsoft AI platform is super rich in terms of tools, services, third-party tool integrations, etc.

Microsoft demoed Azure ML Workbench, which seems like a really cool tool for the time-consuming activity of data wrangling.

 

2. Conference KeyNote summary:

  1. Steve Guggenheimer from Microsoft

Steve talked about the Microsoft AI platform and the applications already building on a lot of its features.

 

Microsoft AI platform-

msftaiplatform

Microsoft demoed Azure ML Workbench, which seems like a really cool tool for the time-consuming activity of data wrangling.

azuremlworkbench

 

The platform is super rich in terms of tools, services, third-party tool integrations, etc.

Ethics in AI

Microsoft realizes the potential of AI and how it can be misused, and hence Steve shared Microsoft's AI ethics principles. Satya has talked about compassionate AI as the AI of the future.

Microsoft has published a nice book on this subject  called “The Future Computed”

https://msblob.blob.core.windows.net/ncmedia/2018/01/The-Future_Computed_1.26.18.pdf

ai ethics msft

I also liked the live demo of how the Bing team uses specialized FPGAs. FPGAs, or Field Programmable Gate Arrays, are programmable hardware devices, sort of like a CPU dedicated to a specific task rather than general-purpose work, which allows optimizations to be built in.

CPU vs. FPGA performance within the Bing team – FPGAs and ASIC derivatives dedicated to a certain task perform really well while consuming a fraction of the power.

fpga

2. AI at Didi Chuxing – Didi Chuxing is like the Uber of China, and the scale they have to deal with is humongous. I liked Didi Chuxing's presentation on how they are using AI in the transportation sector. A lot of it can be applied to other fields as well, as the problems are similar in nature. They presented the iterations of how they solved their problems using various AI algorithms, and they have narrowed it down to deep learning and reinforcement learning for forecasting, ETA, dispute resolution, etc. They started with regression models and moved to deep learning models. Deep learning has helped them solve more problems.

didi_ai

They have applied AI to multiple problem areas within transportation:

didi_projects

More details here:

https://www.slideshare.net/BillLiu31/ai-at-didi-by-jieping-ye

3. UW Prof. Oren Etzioni also presented a good deck on the future of AI, which was more about whether AI is the evil power it is made out to be rather than the typical technical trends of AI.

superinteligence

Winograd schemas are an alternative to the Turing Test, developed by Hector Levesque.

The Turing Test is intended to serve as a test of whether a machine has achieved human-level intelligence. In one of its best-known versions, a person attempts to determine whether he or she is conversing (via text) with a human or a machine. However, it has been criticized as inadequate: at its core, the Turing Test measures a human's ability to judge deception. Can a machine fool a human into thinking that it too is human? This suggests that the Turing Test may not be an ideal way to judge a machine's intelligence. An alternative is the Winograd Schema Challenge.

Rather than basing the test on the sort of short free-form conversation suggested by the Turing Test, the Winograd Schema Challenge (WSC) poses a set of multiple-choice questions that have a particular form. The test is dedicated to furthering and promoting research in the field of formal commonsense reasoning. A commonly cited example: “The trophy doesn't fit in the brown suitcase because it is too big. What is too big?” Answering correctly requires commonsense knowledge about which noun “it” refers to.

4. “TensorFlow and deep reinforcement learning without a PhD” by Martin Görner from Google.

He briefly alluded to Auto ML which learns the model architecture directly on the dataset of interest:

automl

Google DeepMind teaching a virtual human to walk, jump, etc. I was an athlete, and when I look at the image below, the stride and the arms are just what a good long jumper would use and be proud of 🙂

jump_rl

Demonstration of playing Pong with deep reinforcement learning and lots of data, without any specialized algorithms (a toy numpy sketch of the underlying policy-gradient idea follows the link below):

https://www.youtube.com/watch?v=YFe0YbaRIi8
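A toy numpy-only sketch of the policy-gradient idea behind such demos (my own illustration, not the actual Pong agent): the “game” here is just two possible moves with hidden payout rates, and the only rule is to increase the probability of whatever led to reward.

import numpy as np

np.random.seed(0)
true_rewards = [0.2, 0.8]    # hidden payout probabilities of the two "moves"
logits = np.zeros(2)         # the policy's parameters

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for step in range(5000):
    probs = softmax(logits)
    action = np.random.choice(2, p=probs)
    reward = float(np.random.rand() < true_rewards[action])
    grad = -probs
    grad[action] += 1.0                  # gradient of log pi(action) w.r.t. the logits
    logits += 0.05 * reward * grad       # REINFORCE: reward-weighted policy-gradient step

print(softmax(logits))                   # the policy learns to prefer the better move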

Link to the video of his talk: https://www.youtube.com/watch?v=aRKOJHRbXeo

Google DeepMind taught itself to walk – https://www.youtube.com/watch?v=gn4nRCC9TwQ

 

5. Keynote – “Deep learning at Amazon Alexa” by Nikko Strom from Amazon

This is very powerful, as it shows how Alexa uses multimodality along with device and personal context. This can really engage the user!

3. Summary of Breakout sessions (I attended)

  1. ML track – Twitter – Parameter Server approach for online ML at Twitter

The talk discussed the evolution of parameter servers at Twitter, which need to scale and support real-time approaches to online ML. Their approach has centered on load balancing, filtering, and centralized parameter servers. They have tried deep learning, but found that as of now it is not working for them (a toy sketch of what an online, per-example model update looks like follows the list below):

Some of the disadvantages of Deep learning:

  1. Latency for their use case is too high
  2. Model quality was not impacted much, so the ROI was low
  3. Newer approaches in ML could displace deep learning
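A minimal, self-contained sketch of what such an online update can look like (my own illustration with a simulated event stream, not Twitter's actual system): the model weights are adjusted one example at a time, the way a parameter server applies streaming gradient updates to shared weights.

import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                       # the shared "parameter server" weights
true_w = np.array([1.5, -2.0, 0.5])   # hidden ground truth for the simulated stream

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for t in range(10000):                # each iteration = one event arriving in real time
    x = rng.normal(size=3)
    y = float(rng.random() < sigmoid(true_w @ x))   # simulated label, e.g. click / no click
    p = sigmoid(w @ x)
    w -= 0.05 * (p - y) * x           # single-example logistic-regression SGD step

print(w)                              # drifts toward true_w without any batch retraining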

 

  2. ML track – Machine learning at scale by Amy Unruh from Google. The talk showed that there is a gap in the Google ML offering, which is being addressed by Auto ML for Vision. It also compared the various approaches in terms of the resources typically needed to solve an AI problem:

1)      Time

2)      Prediction Code

3)      Serving Infrastructure

4)      Model Code

5) Training data

gcp_spectrum

Resources needed to solve an AI problem  per Google

ml_resources

ML  as an API  – Mainly time and prediction code

ml_api
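A rough sketch of what the “prediction code” can look like when ML is consumed purely as an API (my own illustration; the endpoint and request shape are how I recall the Cloud Vision REST API, so check the current docs, and API_KEY and cat.jpg are placeholders):

import base64
import requests

API_KEY = "YOUR_API_KEY"
with open("cat.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

body = {"requests": [{
    "image": {"content": img_b64},
    "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
}]}

resp = requests.post("https://vision.googleapis.com/v1/images:annotate",
                     params={"key": API_KEY}, json=body)
print(resp.json())   # labels the pre-trained model predicts for the image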

Custom code and model – more resource-intensive

custom build

Custom model with transfer learning from another project – it takes less time and can reuse model code and training data

transfer learning
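A minimal transfer-learning sketch (my own illustration, assuming TensorFlow/Keras is installed): reuse a network pre-trained on ImageNet and train only a small new head on your own classes, which is why it needs far less data, time, and model code.

import tensorflow as tf

# Reused base: pre-trained ImageNet features, frozen so they are not retrained.
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(3, activation="softmax"),   # new head for 3 custom classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(my_images, my_labels, epochs=5)   # my_images/my_labels are placeholders for your data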

Google has identified a gap in the continuum from DIY ML to ML APIs

gcp_spectrum

They are trying to address it with Auto ML. This is currently limited to Vision API only.

cloud automl

Auto ML – Google currently offers it only for the Vision API, but it allows deep neural networks to be auto-generated.

It allows savings on model code, serving infrastructure, etc.

cloudautoml_vision

You only need to provide training data; it creates, trains, and deploys a neural network automatically.

automl_resuce

Under the hood it is creating new neural network layers automatically.

The content below came from Google keynote speaker Martin Görner's talk titled “TensorFlow and deep reinforcement learning without a PhD”:

automl

He briefly alluded to Auto ML which learns the model architecture directly on the dataset of interest

automl_dia

More details on Auto ML  here:

https://www.blog.google/topics/google-cloud/cloud-automl-making-ai-accessible-every-business/

https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html

 3. Deep multimodal intelligence by Xiaodong He from Microsoft –

Xiaodong He of Microsoft Research described how a system can describe a scene in natural language by:

  1. Understanding the image's content
  2. Reasoning about relationships among objects & concepts
  3. Generating a story in natural language

However, true understanding of the world is much more challenging

multimodality

https://www.slideshare.net/BillLiu31/deep-multimodal-intelligence-by-xiaodong-he

There were quite a few other parallel talks  but time was limited 😦

4. Presentation Tidbits

A slight digression on a nice presentation tool I saw some speakers use at the conference, which complements the laser pointer. It retails for around $130.

logitech-spotlight-main

https://www.logitech.com/en-us/product/spotlight-presentation-remote?crid=11

It is also great for highlighting code, etc.

mortin_phd_highlighting

5.  Links /References

Slides:

Videos:

Papers — Lots of good papers on KDD:

Blog:

Books:

 

 

 

 

Age of the everyday IoT entrepreneur is here and now ….

August 28, 2017

IoT is the new buzzword these days. Anyone who knows anything about information technology is feeling the buzz. There are a lot of products making all this possible for the everyday entrepreneur as well as for established companies: access to electronics, access to code that makes it easy to interface with them, the low price of components, and easy access to amazing education through video tutorials, blog posts, meetup groups, etc.

The future of the everyday entrepreneur is here and now. If you can imagine it – and, even better, take the ideas from paper to products on shelves – then your time has come….

Bigger corporations have very few advantages over an everyday entrepreneur these days, whether in design, engineering, supply chain, or distribution.

For anyone new to the space who has an idea we can help you get it to market by either partnering with you or consulting on the product…

The hype around IoT and its areas of application are clearly laid out in this older Gartner article:

Iot_

Source: Gartner (July 2014)

http://www.gartner.com/technology/reprints.do?id=1-27LJLAK&ct=150119&st=sb

 

Challenges:

The biggest challenges in IoT from a technology perspective are:

1)      Lightweight networking standards

2)      Securing the data

3)      Power Management in IoT devices

In the end, I think if you are getting into the new space of IoT or have some great ideas, it is time to put them into practice. Happy IoT'ing!

From the trenches – Arduino compilation and link errors and fixes to those

August 28, 2017

Error 1 – Linker error: undefined reference
Arduino: 1.6.5 (Windows 8.1), Board: “Arduino/Genuino Mega or Mega 2560, ATmega2560 (Mega 2560)”

myrobot.cpp.o: In function `__static_initialization_and_destruction_0′:
C:\Program Files (x86)\Arduino/myrobot.ino:57: undefined reference to `Michelino::Robot::Robot()’
myrobot.cpp.o: In function `loop’:
C:\Program Files (x86)\Arduino/myrobot.ino:272: undefined reference to `Michelino::Robot::run()’
collect2.exe: error: ld returned 1 exit status
Error compiling.

Solution
The problem is that the class definition should live only in the header file; the .cpp file should not contain the class keyword/definition again – it should only contain the member function definitions (e.g. void Ctest::a() { ... }), otherwise the functions declared in the header are never actually defined and the linker cannot find them.

Original code :

namespace test
{
class Ctest
{
public:
Ctest() {};
void a() {};
};
};

test.h
#ifndef rocketbot_remote_control_h
#define rocketbot_remote_control_h
namespace test
{
class Ctest
{
public:
// Ctest();
void a();
};
};
#endif

main

#include "test.h"

using namespace test;
test::Ctest r;
//Ctest r;

void setup() {
// put your setup code here, to run once:

}

void loop() {
// put your main code here, to run repeatedly:
r.a();
}

Fixed Code
Test.cpp
#include "test.h"
namespace test
{
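// Only the member-function definitions live here; the class itself is declared once in test.h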
//void Ctest::Ctest() {};
void Ctest::a() {};
};

Test.h
#ifndef rocketbot_remote_control_h
#define rocketbot_remote_control_h
namespace test
{
class Ctest
{
public:
// Ctest();
void a();
};
};
#endif

Main
#include "test.h"

using namespace test;
test::Ctest r;
//Ctest r;

void setup() {
// put your setup code here, to run once:

}

void loop() {
// put your main code here, to run repeatedly:
r.a();
}

Error 2 – Function not found

Arduino: 1.6.5 (Windows 8.1), Board: “Arduino/Genuino Mega or Mega 2560, ATmega2560 (Mega 2560)”

In file included from robot_vehicle_with_bth.h:61:0,
from myrobot.ino:55:
rocketbot_remote_control.h:11: error: no matching function for call to ‘SoftwareSerial::SoftwareSerial()’
SoftwareSerial BTSerial;
^
rocketbot_remote_control.h:11:16: note: candidates are:
In file included from robot_vehicle_with_bth.h:34:0,
from myrobot.ino:55:
C:\Program Files (x86)\Arduino\hardware\arduino\avr\libraries\SoftwareSerial/SoftwareSerial.h:90:3: note: SoftwareSerial::SoftwareSerial(uint8_t, uint8_t, bool)
SoftwareSerial(uint8_t receivePin, uint8_t transmitPin, bool inverse_logic = false);
^
C:\Program Files (x86)\Arduino\hardware\arduino\avr\libraries\SoftwareSerial/SoftwareSerial.h:90:3: note: candidate expects 3 arguments, 0 provided
C:\Program Files (x86)\Arduino\hardware\arduino\avr\libraries\SoftwareSerial/SoftwareSerial.h:47:7: note: SoftwareSerial::SoftwareSerial(const SoftwareSerial&)
class SoftwareSerial : public Stream
^
C:\Program Files (x86)\Arduino\hardware\arduino\avr\libraries\SoftwareSerial/SoftwareSerial.h:47:7: note: candidate expects 1 argument, 0 provided
no matching function for call to ‘SoftwareSerial::SoftwareSerial()’
Solution
Add the extern keyword to the declaration in the header file, so the header only declares the object; the actual object (with its constructor arguments, e.g. SoftwareSerial BTSerial(10, 11);) should be defined once in a single .cpp/.ino file.
extern SoftwareSerial BTSerial;

Error 3 — redefinition of ‘class Michelino::RemoteControl’
Arduino: 1.6.5 (Windows 8.1), Board: “Arduino/Genuino Mega or Mega 2560, ATmega2560 (Mega 2560)”

In file included from myrobot.ino:56:0:
rocketbot_remote_control.h:14: error: redefinition of ‘class Michelino::RemoteControl’
class RemoteControl : public RemoteControlDriver
^
In file included from robot_vehicle_with_bth.h:61:0,
from myrobot.ino:55:
rocketbot_remote_control.h:14: error: previous definition of ‘class Michelino::RemoteControl’
class RemoteControl : public RemoteControlDriver
^
redefinition of ‘class Michelino::RemoteControl’

Solution:
Add include guards, since the header file gets included multiple times:
#ifndef rocketbot_remote_control_h
#define rocketbot_remote_control_h
….
#endif

Error 4 – Class has not been declared
x.cpp file
robot_vehicle_with_bth.cpp:77: error: ‘Robot’ has not been declared
void Robot::Robot()

Solution
The solution is to include the header file (x.h) that declares the class. Also note that a constructor definition should not have a return type: it should be Robot::Robot(), not void Robot::Robot().
#include "x.h"

Stay tuned, more fun ones coming!

How to Kernel Debug Connected Standby/Modern Standby systems?

October 24, 2016

Premise:

Debugging a Modern Standby (earlier called Connected Standby) scenario can be challenging, as there are some subtle things to keep in mind. Most Modern Standby/Connected Standby systems are newer systems with USB 3.0 xHCI controllers, so this blog post focuses only on systems that support USB 3.0 debugging.

What you need:

  1. USB 3 cable – http://www.datapro.net/products/usb-3-0-super-speed-a-a-debugging-cable.html
  2. USB Type C to type A adapter – Needed only if the device doesn’t have a USB Type A port
  3. WinDbg bits – available from many sources, including the kits (WDK or ADK)

Methodology to setup Kernel Mode debugging

  1. Set up the machine for USB 3.0 debugging as described here: https://msdn.microsoft.com/en-us/library/windows/hardware/hh439372(v=vs.85).aspx – on the target this boils down to something like bcdedit /debug on and bcdedit /dbgsettings usb targetname:<name>, then connecting with windbg -k usb:targetname=<name> from the host
  2. Make sure you disable Secure Boot in the BIOS menu
  3. Hook up the cable as shown below  setup
  4. Check the USB device hierarchy to identify the controller and hub(s) your debug port hangs off, so you can turn off power management for those components. You can do this from Device Manager or USB tools. usb_hierarcy
  5. Disable power saving on the USB hub(s) on the target – for the USB hub, uncheck the box that allows the computer to turn off the device to save power.  usb_hub_power
  6. Disable power saving on the USB xHCI controller – uncheck the box that allows the computer to turn off the device to save power. usb_hub_power
  7. If there are multiple controllers or hubs, make sure you pick the right ones where you plan to debug. Also, if there is another level of hub in between, do the same for that as well.
  8. Debug away!!