I recently installed Ubuntu on one of the Raspberry Pis at home, along with Podman – which I hadn’t heard of until recently; it is a container engine similar to Docker, but without a daemon.
When trying to get a basic Alpine test image running, I got this error:
Error: error creating build container: short-name "python:3.7-alpine" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
This is because short names, it seems, aren’t resolved by default – at least not on the Ubuntu (ARM) version. To fix this, the following needs to be added to the /etc/containers/registries.conf file:
unqualified-search-registries=["docker.io"]
And once you save, running podman-compose up should work as expected.
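To sketch the fix end to end – the real file is /etc/containers/registries.conf and editing it needs sudo; a temp copy is used here purely for illustration:

```shell
# Illustrative sketch: append the unqualified-search setting and verify it.
# In practice the file is /etc/containers/registries.conf (root-owned);
# a temporary copy is used here so this can run anywhere.
conf=$(mktemp)
printf 'unqualified-search-registries = ["docker.io"]\n' >> "$conf"

# With this in place, a short name like python:3.7-alpine resolves to
# docker.io/library/python:3.7-alpine
grep 'unqualified-search-registries' "$conf"
```

With the real file updated, podman pull python:3.7-alpine should then resolve against docker.io.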
If you are like me and don’t really have your work saved under “%USERPROFILE%”, it gets annoying after a while to keep changing the directory.
If there is one specific folder you prefer, it is an easy configuration change in the profile settings – add a setting called “startingDirectory” and point it to the path you want.
For example, I have a root folder called “src” where most of the code I am working on sits, and that’s where I wanted to default the terminal to.
To get to the profile, you can either use the shortcut CTRL+, or from the dropdown in the title bar, click settings (see below). This will open the settings.json in your default editor.
In my case, I wanted the starting directory for all the shells, so I put it under “defaults” – you can choose different options for different shells, in which case the setting would go in the appropriate shell’s block rather than the defaults block.
Below is what this looks like for me, pointing to “c:\src”. Also note that the backslashes need to be escaped for the JSON to parse.
"defaults":
{
    // Put settings here that you want to apply to all profiles.
    "fontFace": "CaskaydiaCove NF",
    "startingDirectory": "c:\\src",
},
Once you save the file, the terminal should reload automatically. And if the JSON didn’t parse – because of a typo or a syntax error – you will see an error similar to the one shown below.
In this example, I set the starting folder as “c:\src” instead of “c:\\src”.
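You can check the escaping quickly from a shell. This sketch pipes the fragment through Python’s json module just to validate it; the approach is illustrative, not part of Windows Terminal itself:

```shell
# "c:\src" is invalid JSON (\s is not a valid escape); "c:\\src" parses fine.
# printf turns the doubled backslashes below into the literal JSON text
# {"startingDirectory": "c:\\src"} before it reaches the parser.
printf '{"startingDirectory": "c:\\\\src"}' |
  python3 -c 'import json, sys; print(json.load(sys.stdin)["startingDirectory"])'
# prints: c:\src
```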
One of the key advances in the latest version of Windows 10 (2004) is WSL2 (Windows Subsystem for Linux v2) – and whilst just a version bump, it offers so much more. It allows us to run Linux binaries (ELF64) with near-native performance.
Before we get into the steps to install WSL2, I also recommend installing Windows Terminal and winget. Although not required, they make things simpler and give a better (dev) experience – especially when setting up a new workstation.
For WSL2 to work, you need to make sure you are on Windows 10 2004 Build 19041 or higher. If you aren’t, run Windows Update and see if that updates your OS. If that doesn’t offer an update, you could also try the Windows Update Assistant.
Getting WSL2, whilst not complicated, needs the following steps done in this order, running the commands in an elevated prompt.
Enable the Windows Subsystem for Linux optional feature.
Run Windows update (and reboot again if there are updates)
Set WSL2 as your default option.
wsl --set-default-version 2
Install your Linux distro of choice. You can do this via the Store, or via winget – for example Ubuntu, using the following command.
winget install -e --id Canonical.Ubuntu
Note: if, when setting WSL2 as the default option above, you get error 0x1bc, it most likely means you need to run Windows Update and reboot.
And here is Ubuntu running, and me updating it.
So, what’s the big deal? This is where it gets quite interesting, and one simple example is the Windows interoperability with Linux – allowing one to run Linux commands from within a command prompt.
If you have butter fingers like me and over time end up with a lot of old commands with typos in your Windows Run box that get annoying – deleting them is simple. All you need to do is remove the following registry key.
Now, any time one plays with regedit it can be dangerous – you can also save this command as a .cmd file and then run it with admin privileges; it essentially does the same thing.
As my experimentation continues, I wanted to get Visual Studio Code installed on a Mac, with Python as the language of choice – the main reason for the Mac is to understand and explore the #ML libraries, runtimes, and their support on a Mac (both natively and in containers – Docker).
Now, Microsoft has a very nice tutorial to get VS Code set up and running on a Mac, including some basic configuration (e.g. Touch Bar support). But when it comes to getting Python set up and running, that is a different matter. Whilst the tutorial is good, it doesn’t actually work and errors out.
Below is the code that Microsoft outlines in the tutorial for Python. It essentially is HelloWorld using packages and is quite simple; but as-is it will fail.
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 20, 100) # Create a list of evenly-spaced numbers over the range
plt.plot(x, np.sin(x)) # Plot the sine of each x point
plt.show() # Display the plot
When you run this, you will see an error that is something like the one outlined below.
The main reason this fails is that one has to be a little more explicit with matplotlib (the library we are trying to use). Matplotlib has a concept of backends, which essentially are the runtime dependencies needed to support various execution environments – both interactive and non-interactive.
For matplotlib to work on a Mac, the raster graphics C++ library it uses is based on something called Anti-Grain Geometry (AGG), and for it to render we need to be explicit about which AGG-based backend to use (there are multiple raster libraries).
In addition, on macOS there is a limitation when rendering in OS X windows (it presently lacks blocking show() behavior when matplotlib is in non-interactive mode).
To get around this, we explicitly tell matplotlib to use a specific backend (“TkAgg” in our case) and then it all works. I have an updated code sample below, which adds more points and also waits for console input, so one can see what the output looks like.
import matplotlib
matplotlib.use("TkAgg")

from matplotlib import pyplot as plt
import numpy as np

def waitforuser():
    input("Press enter to continue ...")

x = np.linspace(0, 50, 200)  # Create a list of evenly-spaced numbers over the range
y = np.sin(x)

print(x)
waitforuser()

print(y)
waitforuser()

plt.plot(x, y)
plt.show()
And in case you are wondering what it looks like, below are a few screenshots showing the output.
I am writing this on a Microsoft Surface Book running Ubuntu natively – there isn’t any Windows option; I blew away the Windows partition, and there isn’t any other OS on it.
Why, some of you might ask? Well, why not. 🙂 For me the motive is twofold: one, I am a geek and love to hack on what works and what doesn’t – how else will one learn? And two, to explore and see which AI frameworks, tools, and runtimes work better on Linux natively.
Well, I must say this experiment has been a pleasant surprise and much more successful than I originally expected. Most things work quite well on the Surface with Ubuntu – including touch and pen (both act like mouse clicks). As the screenshot below shows, Ubuntu is running quite nicely – including most of the features. There are a few things that don’t quite work – I have them listed later in the post.
So much so that Visual Studio Code is running natively, and whilst I haven’t had a chance to use it much (yet), the fact that it runs at all was something I wasn’t expecting without containers or VMs or the like.
So, how does one go about doing this? It is quite simple these days to be honest. Below are the steps I had followed. I do think the real magic is the hard work that JakeDay has done to get the kernel and firmware supported.
Disclaimer: My experience outlined here is related to the Surface Book – it can also run and be supported on other Surface devices, and the exact nature of what works or doesn’t work would be a little different.
Hardware – have a USB keyboard and mouse handy just in case; and if you are on a Surface Pro or something with only one USB port, then a USB hub. You will of course also need a USB drive to boot Ubuntu off.
Disable Secure Boot – without this, getting the boot sequence working would be challenging. If you aren’t sure how, check out the instructions here to disable Secure Boot.
Delete / shrink the Windows partition – if you don’t care about Windows and have a copy of the license somewhere, you might want to just delete it. If you want to shrink it instead (say this is your primary machine and you want to go back at some point), then go to Disk Management in Windows and resize the partition – keep it to at least 50 GB.
Ubuntu USB drive – if you don’t have one already, create a bootable Ubuntu USB drive. You can get more instructions here. And if you are on Windows, I would recommend using Rufus.
Install Ubuntu – boot off the USB drive you created (after making sure you have disabled Secure Boot). I would pick most of the default options for now.
Patched kernel – once you have Ubuntu running, I recommend installing the patched kernel and headers that allow for Surface support. The steps are outlined below and need to be executed in a terminal.
Install Dependencies: sudo apt install git curl wget sed
Clone the repo: git clone https://github.com/jakeday/linux-surface.git ~/linux-surface
Change working directory: cd ~/linux-surface
Run setup: sudo sh setup.sh
Reboot on the patched kernel
Change boot kernel: Finally, after you have rebooted, the odds of Ubuntu booting off the ‘right’ kernel are quite slim, and it is best to pick it manually. You can of course use grub directly, or – what I find better – install Grub Customizer and choose the correct option as shown below. Once picked, and after you hit save, you also need to run the following in a terminal to make it persist: sudo update-grub
And that is all there is to it for getting the base install and customization running.
If you are super curious what that setup script does, the code is below (also listed on GitHub). It is interesting to see the various hardware models supported.
LX_BASE=""
LX_VERSION=""

if [ -r /etc/os-release ]; then
    . /etc/os-release
    if [ $ID = arch ]; then
        LX_BASE=$ID
    elif [ $ID = ubuntu ]; then
        LX_BASE=$ID
        LX_VERSION=$VERSION_ID
    elif [ ! -z "$UBUNTU_CODENAME" ] ; then
        LX_BASE="ubuntu"
        LX_VERSION=$VERSION_ID
    else
        LX_BASE=$ID
        LX_VERSION=$VERSION
    fi
else
    echo "Could not identify your distro. Please open script and run commands manually."
    exit
fi

SUR_MODEL="$(dmidecode | grep "Product Name" -m 1 | xargs | sed -e 's/Product Name: //g')"
SUR_SKU="$(dmidecode | grep "SKU Number" -m 1 | xargs | sed -e 's/SKU Number: //g')"

echo "\nRunning $LX_BASE version $LX_VERSION on a $SUR_MODEL.\n"
read -rp "Press enter if this is correct, or CTRL-C to cancel." cont;echo
echo "\nContinuing setup...\n"

echo "Coping the config files under root to where they belong...\n"
cp -Rb root/* /

echo "Making /lib/systemd/system-sleep/sleep executable...\n"
chmod a+x /lib/systemd/system-sleep/sleep

read -rp "Do you want to replace suspend with hibernate? (type yes or no) " usehibernate;echo
if [ "$usehibernate" = "yes" ]; then
    if [ "$LX_BASE" = "ubuntu" ] && [ 1 -eq "$(echo "${LX_VERSION} >= 17.10" | bc)" ]; then
        echo "Using Hibernate instead of Suspend...\n"
        ln -sfb /lib/systemd/system/hibernate.target /etc/systemd/system/suspend.target && sudo ln -sfb /lib/systemd/system/systemd-hibernate.service /etc/systemd/system/systemd-suspend.service
    else
        echo "Using Hibernate instead of Suspend...\n"
        ln -sfb /usr/lib/systemd/system/hibernate.target /etc/systemd/system/suspend.target && sudo ln -sfb /usr/lib/systemd/system/systemd-hibernate.service /etc/systemd/system/systemd-suspend.service
    fi
else
    echo "Not touching Suspend\n"
fi

read -rp "Do you want use the patched libwacom packages? (type yes or no) " uselibwacom;echo
if [ "$uselibwacom" = "yes" ]; then
    echo "Installing patched libwacom packages..."
    dpkg -i packages/libwacom/*.deb
    apt-mark hold libwacom
else
    echo "Not touching libwacom"
fi

if [ "$SUR_MODEL" = "Surface Pro 3" ]; then
    echo "\nInstalling i915 firmware for Surface Pro 3...\n"
    mkdir -p /lib/firmware/i915
    unzip -o firmware/i915_firmware_bxt.zip -d /lib/firmware/i915/
fi

if [ "$SUR_MODEL" = "Surface Pro" ]; then
    echo "\nInstalling IPTS firmware for Surface Pro 2017...\n"
    mkdir -p /lib/firmware/intel/ipts
    unzip -o firmware/ipts_firmware_v102.zip -d /lib/firmware/intel/ipts/

    echo "\nInstalling i915 firmware for Surface Pro 2017...\n"
    mkdir -p /lib/firmware/i915
    unzip -o firmware/i915_firmware_kbl.zip -d /lib/firmware/i915/
fi

if [ "$SUR_MODEL" = "Surface Pro 4" ]; then
    echo "\nInstalling IPTS firmware for Surface Pro 4...\n"
    mkdir -p /lib/firmware/intel/ipts
    unzip -o firmware/ipts_firmware_v78.zip -d /lib/firmware/intel/ipts/

    echo "\nInstalling i915 firmware for Surface Pro 4...\n"
    mkdir -p /lib/firmware/i915
    unzip -o firmware/i915_firmware_skl.zip -d /lib/firmware/i915/
fi

if [ "$SUR_MODEL" = "Surface Pro 2017" ]; then
    echo "\nInstalling IPTS firmware for Surface Pro 2017...\n"
    mkdir -p /lib/firmware/intel/ipts
    unzip -o firmware/ipts_firmware_v102.zip -d /lib/firmware/intel/ipts/

    echo "\nInstalling i915 firmware for Surface Pro 2017...\n"
    mkdir -p /lib/firmware/i915
    unzip -o firmware/i915_firmware_kbl.zip -d /lib/firmware/i915/
fi

if [ "$SUR_MODEL" = "Surface Pro 6" ]; then
    echo "\nInstalling IPTS firmware for Surface Pro 6...\n"
    mkdir -p /lib/firmware/intel/ipts
    unzip -o firmware/ipts_firmware_v102.zip -d /lib/firmware/intel/ipts/

    echo "\nInstalling i915 firmware for Surface Pro 6...\n"
    mkdir -p /lib/firmware/i915
    unzip -o firmware/i915_firmware_kbl.zip -d /lib/firmware/i915/
fi

if [ "$SUR_MODEL" = "Surface Laptop" ]; then
    echo "\nInstalling IPTS firmware for Surface Laptop...\n"
    mkdir -p /lib/firmware/intel/ipts
    unzip -o firmware/ipts_firmware_v79.zip -d /lib/firmware/intel/ipts/

    echo "\nInstalling i915 firmware for Surface Laptop...\n"
    mkdir -p /lib/firmware/i915
    unzip -o firmware/i915_firmware_skl.zip -d /lib/firmware/i915/
fi

if [ "$SUR_MODEL" = "Surface Book" ]; then
    echo "\nInstalling IPTS firmware for Surface Book...\n"
    mkdir -p /lib/firmware/intel/ipts
    unzip -o firmware/ipts_firmware_v76.zip -d /lib/firmware/intel/ipts/

    echo "\nInstalling i915 firmware for Surface Book...\n"
    mkdir -p /lib/firmware/i915
    unzip -o firmware/i915_firmware_skl.zip -d /lib/firmware/i915/
fi

if [ "$SUR_MODEL" = "Surface Book 2" ]; then
    echo "\nInstalling IPTS firmware for Surface Book 2...\n"
    mkdir -p /lib/firmware/intel/ipts
    if [ "$SUR_SKU" = "Surface_Book_1793" ]; then
        unzip -o firmware/ipts_firmware_v101.zip -d /lib/firmware/intel/ipts/
    else
        unzip -o firmware/ipts_firmware_v137.zip -d /lib/firmware/intel/ipts/
    fi

    echo "\nInstalling i915 firmware for Surface Book 2...\n"
    mkdir -p /lib/firmware/i915
    unzip -o firmware/i915_firmware_kbl.zip -d /lib/firmware/i915/

    echo "\nInstalling nvidia firmware for Surface Book 2...\n"
    mkdir -p /lib/firmware/nvidia/gp108
    unzip -o firmware/nvidia_firmware_gp108.zip -d /lib/firmware/nvidia/gp108/
fi

if [ "$SUR_MODEL" = "Surface Go" ]; then
    echo "\nInstalling ath10k firmware for Surface Go...\n"
    mkdir -p /lib/firmware/ath10k
    unzip -o firmware/ath10k_firmware.zip -d /lib/firmware/ath10k/
fi

echo "Installing marvell firmware...\n"
mkdir -p /lib/firmware/mrvl/
unzip -o firmware/mrvl_firmware.zip -d /lib/firmware/mrvl/

read -rp "Do you want to set your clock to local time instead of UTC? This fixes issues when dual booting with Windows. (type yes or no) " uselocaltime;echo
if [ "$uselocaltime" = "yes" ]; then
    echo "Setting clock to local time...\n"
    timedatectl set-local-rtc 1
    hwclock --systohc --localtime
else
    echo "Not setting clock"
fi

read -rp "Do you want this script to download and install the latest kernel for you? (type yes or no) " autoinstallkernel;echo
if [ "$autoinstallkernel" = "yes" ]; then
    echo "Downloading latest kernel...\n"

    urls=$(curl --silent "https://api.github.com/repos/jakeday/linux-surface/releases/latest" | grep '"browser_download_url":' | sed -E 's/.*"([^"]+)".*/\1/')
    resp=$(wget -P tmp $urls)

    echo "Installing latest kernel...\n"
    dpkg -i tmp/*.deb
    rm -rf tmp
else
    echo "Not downloading latest kernel"
fi

echo "\nAll done! Please reboot."
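The model detection near the top of the script is worth a closer look – it pipes dmidecode through xargs (to trim whitespace) and sed (to strip the label). A small sketch with sample input (the sample string here is made up for illustration, since dmidecode needs root and real hardware):

```shell
# A sample line in the shape dmidecode emits (leading tab, label, value)
sample='	Product Name: Surface Book'

# xargs collapses and trims the whitespace; sed strips the "Product Name: " label
model=$(echo "$sample" | xargs | sed -e 's/Product Name: //g')
echo "$model"   # prints: Surface Book
```

The same pattern is reused for the SKU number, which is how the script distinguishes, for example, the two Surface Book 2 variants.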
Lastly, below are the things not working for me – none of these are deal breakers but something to be aware of.
Cameras are not supported – either of the two.
Dedicated GPU (if you have one). This was a bit of a bummer, as I got the dedicated GPU for some of the #MachineLearning experimentation, but then this whole thing is a different type of experimentation, so I am OK.
Volume can be controlled using the speaker widget on the top right corner, but the volume buttons on top aren’t working.
Sleep / Hibernation – It has some issues and for now I have sleep disabled but have hibernation setup.
Detaching the screen will immediately terminate everything and power off the machine (not a clean poweroff) – I am guessing it cannot transition between the two batteries of the base and the screen. However, if already detached, it works without any issues.
Some time ago, I talked about my Tesla Model 3 “keyfob”, which essentially uses an Amazon IoT button to call some of the Tesla APIs and “talk” to the car. For me this is cool, as it allows my daughter to unlock and lock the car at home. And of course it is a bit geeky, allowing one to play with more things. 🙂
Since publishing this, I was surprised how many of you pinged me asking for details on how you can do this for yourselves. Given the level of interest, I thought I would document it and outline the steps here. I do have to warn you that this will be a little long – it entails getting an IoT button configured and then the code deployed. Before you get started, and if you aren’t techy, I recommend reading through the post completely so you get a sense of what is needed.
At a high level, below are the steps you need to go through to get this working. It might seem cumbersome and a lot, but it is not that difficult. If you prefer, you can also follow the official AWS documentation online here.
Create an AWS login (if you have an existing Amazon.com login, you can use that if you prefer)
Order an IoT button
Register the IoT button in the AWS registry (this is done via the AWS console)
Create (and activate) a device certificate
Create an IoT security policy
Attach the IoT security policy (from the previous step) to the device certificate created earlier
Attach the IoT security policy (now with the associated certificate) to the IoT button
Configure the IoT button
Deploy some code – this is done via a serverless function (also called a Lambda function) – this is the code that gets executed
Open the AWS home page and log in with your Amazon.com credentials. Of course, if you don’t have an Amazon.com account, then you will want to click sign up on the top right corner to get started.
After I login, I see something similar to the screenshot below. Your exact view might differ a little.
I recommend changing the region to one closer to you. To do this, click on the region on the top right corner and choose the region that is physically closest to you. In the longer run this helps with latency between you clicking the button and the car responding. For example, in my case Oregon makes the most sense.
Once you have an AWS account set up, log in to the AWS IoT console, or on the AWS page from the previous step, scroll down to IoT Core as shown in the screenshot below.
Step 3 – Register IoT Button
The next step is to register your IoT button – which of course means you physically have the button with you. The best way to register is to follow the instructions here; I don’t see much sense in replicating them.
Note: if you are not very technical or comfortable, it might be best to use the “AWS IoT Button Dev” app, which is available both on the App Store (for iOS) and Google Play (for Android).
Once you have registered a button (it doesn’t matter what you call it) – it will show up similar to the screenshot below. I only have one device listed.
Step 4 – Create a Device Certificate
Next, we need to create and activate a certificate for the device. Without this, the button won’t work. The certificate (an X.509 certificate) secures the communication between the button and AWS.
For most people, the one-click certificate creation that AWS has is probably the way to go. To get to this, on the AWS IoT console, click on Secure and then choose Certificates on the left, if not already selected, as shown below. I already have a certificate that you can see in the screenshot below.
If you need to create a certificate, click on the Create button on the top right corner, and choose one of the options shown in the image below. In most cases you would want to use the One-click certificate option.
NOTE: Once you create a Certificate, you get three files (these are the keys) that you need to download and keep safe. The certificate itself can be downloaded anytime, but the private and the public keys CANNOT be retrieved again after you close this page. It is IMPORTANT that you download these and save them in a safe place.
Once you have these downloaded, click on Activate on the bottom. You should see a different certificate number than what you are seeing here. And don’t worry, I have long deleted what you are seeing on this screen. 🙂
You can also see these in the developer guide on AWS documentation.
Step 5 – Create an IoT Security Policy
The next step is to go back to the AWS IoT console page and click on Policies under Security. This is used to create an IoT policy that you will attach to the certificate. Once you have a policy created, it will look something like the screenshot below.
To create a policy, click on Create (or you might be prompted automatically if you don’t have one). On the create screen, you can enter anything you prefer in the Name. I would suggest naming it something you can remember and differentiate if you will have more than one button. In my case, I named it the same as my device.
In the policy statements, for Action enter “iot:Connect” – without the quotes; this is case sensitive, so make sure you match it exactly.
For the Resource ARN enter “*” (again without the quotes) as shown below.
And finally for the effect, make sure “Allow” is checked.
And click on Create at the bottom.
After this is created, you will see the policies listed as shown below. You can see the new one we just created with “WhateverNameYouWillRecognize“. You can also see these and more details in the developer documentation – Create an AWS IoT Policy.
Step 6 – Attach an IoT Policy
The next step is to attach the policy you just created to the certificate created earlier. To do that, click on Secure and Certificates on the left, and then click on the three dots (the ellipsis) on the top right of the certificate you created earlier. From the menu that appears, choose “Attach Policy” as shown below.
From the resulting menu, select the policy that you had created earlier and select Attach. Using a sensible name that you would recognize would be helpful. You can also see these details on the developer documentation.
Step 7 – Attach Certificate to IoT Device
The next step is to attach the certificate to the IoT device (or thing). A device must have a certificate, a private key, and a root CA certificate to authenticate with AWS. Amazon also recommends attaching a device certificate to the device – this probably isn’t helpful right now, but might be in the future if you start playing with this more.
To do this, select the certificate under Security on the left and, same as the previous step, click on the three dots on the top right corner and select “Attach thing”.
And from the next screen select the IoT button that you registered earlier, and select “Attach”.
Step 8 – Configure IoT Button
To validate that everything is set up correctly, the certificate needs to be associated with a policy and a thing (the IoT button in our case). So on the Certificates menu on the left, select your certificate by clicking on it (not the three dots this time, but rather the name). You will see a new screen that shows the details of the certificate, as shown below.
And on the new menu on the left, if you click on Policies you should see the policy you created, and Things should have the IoT button you created earlier.
Once all of this is done the next step is to configure the device. You can see more detailed steps on this on the developer guide here.
KEY TIP: The documentation doesn’t make it too obvious, but as part of configuring, the device (IoT button) becomes an access point that you need to connect to in order to upload the certificate and key you created earlier. You cannot do this from a phone, and it is best done from a desktop/laptop that has a wifi card. Whilst these days all laptops have wifi, that isn’t necessarily true for desktops – so use a machine with wifi that you can temporarily connect to the access point the IoT device creates.
Note: this is only needed to get the device configured to authenticate with AWS and to get on your wifi network; once that is done, you don’t need to do it again.
At last we are getting to the interesting part – a lot of what we were doing until now was getting the button configured and ready.
Now that you have an IoT button configured and registered, the next step is to deploy some code. For this you need to set up a Lambda function using the AWS Lambda console.
When you login, click on Create Function. On the Create function screen, choose the Blueprints option as shown below. You can see some of these in the developer documentation here.
Step 10 – Blueprint Search
On the Blueprints search box (which says Filters by tags), type in “button” (without quotes) and press enter. You should see an option called “iot-button-email” as shown below, select that and click configure on the bottom right corner.
Step 11 – Basic Information
On the next screen that says “Basic information”, enter the details as shown below. The names should be meaningful for you to remember. Roles can be reused across other areas; for now you can use a simple name, something like “unlockCar”, or “unlockCarSomeName” if you have more than one vehicle. The policy template should already be populated, and you shouldn’t need to do anything else.
For the second half – AWS IoT Trigger – select the IoT type as “IoT Button” and enter your device serial number as outlined in the screenshot below.
It won’t hurt to download these certificates and keys in addition to the ones created separately and save them in different folders. As for the Lambda function code, the template code doesn’t matter, as we will be deleting it all. At this point it will be read-only and you won’t be able to modify anything – as shown in the screenshot below.
And finally, scrolling down more, you will see the environment variables. Here is where you need to specify your Tesla credentials for it to be able to create the token and call the Tesla API. For that you need the following two variables: TESLA_EMAIL and TESLA_PASS. These are case sensitive, so you need to enter them as is. And then finally click on Create function.
Step 12 – Code upload
Once you create a function, you will see something like the screen below. In my case the function is called “unlockSquirty”, which is what you are seeing. When on the Configuration page, this is divided into two parts. The top part is the designer, which visually shows you the triggers that execute the function, and what it outputs to on the right-hand side. Below the designer is the editor, where one can edit the code inline or upload a zip file with the code.
In the function code section, in the first drop-down on the left (Code entry type), select Upload a .zip file.
And on the next screen upload the function package that you can download from here.
Make sure the Runtime is Node.js 8.10
Keep the Handler as the default.
Double check your Environment variable contain TESLA_EMAIL, and TESLA_PASS.
And scroll down, and in the Basic settings, change the timeout to 1 minute. We run this asynchronously, and adding a little buffer is better. You can leave all the other settings at their defaults. If your network might be iffy, you can make this 2 minutes.
Step 13 – Code Publish
Once you have entered all of this, click on Save on the top right corner and then Publish new version. Finally, once it is published, you will be able to see the code show up as shown in the screenshot below.
Again, a single click will unlock the car, a double-click would lock it, and a long press (holding it for 2-3 seconds) would open the charge port door.
And here is the code:
var tjs = require('teslajs');

var username = process.env.TESLA_EMAIL;
var password = process.env.TESLA_PASS;

exports.handler = (event, context, callback) => {
    tjs.loginAsync(username, password).done(function(result) {
        var token = JSON.stringify(result.authToken);
        if (token)
            console.log("Login Successful!");

        var options = {
            authToken: result.authToken
        };

        tjs.vehicleAsync(options).done(function(vehicle) {
            console.log("Vehicle " + vehicle.vin + " is: " + vehicle.state);

            var options = {
                authToken: result.authToken,
                vehicleID: vehicle.id_s
            };

            if (event.clickType == "SINGLE") {
                console.log("Single click, attempting to UNLOCK");
                tjs.doorUnlockAsync(options).done(function(unlockResult) {
                    console.log("Doors are now UNLOCKED");
                });
            }
            else if (event.clickType == "DOUBLE") {
                console.log("Double click, attempting to LOCK");
                tjs.doorLockAsync(options).done(function(lockResults) {
                    console.log("Doors are now LOCKED");
                });
            }
            else if (event.clickType == "LONG") {
                console.log("Long click, attempting to CHARGE PORT");
                tjs.openChargePortAsync(options).done(function(openResult) {
                    console.log("Charge port is now OPEN");
                });
            }
        });
    });
};
One often hears of Lines of Code (LoC) as a metric. And to get a sense of what it means, below is an infographic that outlines some popular products and services and the LoC they take. It is always interesting to get perspective – either to appreciate some homegrown system you are managing, or to worry about a stinking pile you are going to inherit or build. 🙂
Inspired by a few folks on a few forums online, I took the liberty to extend their idea using an IoT button that acts as a simple “keyfob” for the Model 3.
The main goal was to allow my daughter to lock and unlock the car at home. She is too young to have a phone, and without a more traditional fob, this gets a little annoying.
I extended the original idea to understand the different presses (single, double, and long press) and accordingly call the appropriate API: unlock on a single press (think of it as a single click), lock on a double press, and open the charge port on a long press (when one presses and holds the button 2-3 seconds).
For those who aren’t aware, the Amazon IoT button calls a Lambda function on AWS, and plugging into that, one can extend this. The button needs to be connected and online for this to work; in my case, it is on the home wifi network.
Update: Many of you asked how to set this up for yourselves; I got around to blogging all the steps on that; you can read them here.
If you have a Tesla and are using (or wanting to use) 3rd-party tools or data loggers, the one thing they of course need is to authenticate your details with Tesla. A simple but insecure way is to use your Tesla credentials – and surprisingly, many people just happily share and use these.
I wasn’t comfortable doing this – after all, they would have access to your account, where you can control a lot of things. Also, there are a few online tools that can generate the auth token, but again I wasn’t comfortable, as I did not know what they saved and what they did not. 🙂
So, I wrote a simple Windows app that allows you to generate an auth token that you can save. The application itself is simple: you enter your Tesla credentials, click on Generate Token, and can save the generated token.
To test, if the generated token is working – click on the Test Token button. If everything is working as expected, you will see a list of vehicles that is associated with your account.
If you prefer to use the cURL script, click on the Generate cURL, will generate this and copy it to your clipboard. And it works across operating systems as you can see below (Windows, and Linux), but should also work on Mac.
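For context, the generated cURL is just an authenticated GET against the vehicles endpoint of the (unofficial) Tesla owner API. Here is a sketch of how such a command can be assembled; the owner-api URL is the commonly documented endpoint, and the token is of course a placeholder:

```javascript
// Assemble the cURL command for listing vehicles with a bearer token.
// owner-api.teslamotors.com/api/1/vehicles is the commonly used endpoint;
// the token value is a placeholder you would substitute with your own.
function buildVehiclesCurl(token) {
  return 'curl -H "Authorization: Bearer ' + token + '" ' +
         'https://owner-api.teslamotors.com/api/1/vehicles';
}
```

Running the resulting command with a valid token returns a JSON list of the vehicles on the account, which is the same data the Test Token button shows.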
I do intend to open-source this so folks can have a look at the code and the Tesla REST APIs. Until then, you can download the setup from here.
Leave a comment if you have any issues or any requests.
Update: v1.0.1 published with minor updates; you can download it from the same link above. This adds the revoke screen and some housekeeping.
If you were trying to pull the latest donkeycar source code on your Raspberry Pi and got the following error, your clock is probably off (I suspect some nonce check is failing). This can happen if your Pi has been powered off for a while (as in my case) and its clock has drifted (clock drift is a real thing). :)
fatal: unable to access 'https://github.com/wroscoe/donkey/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
To fix this, the following commands work. It seems the Raspberry Pi 3 has NTP disabled by default, and the first command enables it. I also checked the result with the second command, and forced the RTC setting with the third.
sudo timedatectl set-ntp True
timedatectl status
sudo timedatectl set-local-rtc true
And that should do it; you might need to reboot the Pi just to get it back in sync, and then you should be able to pull the code off git and deploy your autonomous car.
I still get goosebumps reading that article – but then, I am a geek, if that wasn’t obvious. Whilst grid computing with GFS, MapReduce, and Hadoop is still very much relevant and great (with most others still trying to use and understand it), and Dynamo (from Amazon) and BigTable led to NoSQL, which is great and still worth spending a lot of time learning, playing, and experimenting with – I would love to hear what they are doing now with Colossus (think of it as GFS vNext), Caffeine, and Spanner.
Seven years is an eternity, and who knows what is cooking? And of course, what are both Microsoft and Amazon doing to compete around this? How can you not continue to be excited about the world we are living in? 🙂
I have talked to a few folks recently, and they still don’t believe bash on Windows (RS1) is ‘real’ and think it is some kind of VM. No, it is not. It is the ‘real’ user mode running on Windows. It is not Cygwin, and it is not a VM. It is essentially all of the user mode (i.e., Linux without the kernel).
The kernel in this case is a wrapper around the NT kernel that translates Linux syscalls into Windows ones, and then things run. As far as Linux is concerned, it is the same code and doesn’t have any changes. Technically this is called the Windows Subsystem for Linux (WSL).
On Windows, this is installed in the user space, so each user effectively gets their own instance, isolated from other users. Once you install it (and if you are still reading this, then you probably know how to install it), it shows up under C:\Users\your-user-ID\AppData\Local\lxss. If you can’t find that folder, you can still type the path and navigate to it. Below is a screenshot of what this looks like:
It is a little interesting, and I have been mucking around with it. Here you can see the installation of gcc:
And here is the output of the CPU details:
root@localhost:/proc# cat cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 78
model name : Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
stepping : 3
microcode : 0xffffffff
cpu MHz : 2808.000
cache size : 256 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 6
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm pni pclmulqdq est tm2 ssse3 fma cx16 xtpr sse4_1 sse4_2 movbe popcnt aes xsave osxsave avx f16c rdrand hypervisor
bogomips : 5616.00
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
(The output for processors 1 through 3 is nearly identical - only the core id differs - so it is trimmed here for brevity.)
root@localhost:/proc#
All in all, a very interesting world. A few things to note:
This is still in beta, so there will be issues.
It is user mode and not server mode. Live with it.
There will be path issues if you stray past Windows’ 260-character MAX_PATH limit and then try to manipulate those paths in bash.
OK, now this is cool – not only is bash on fire, but I can Miracast directly from Windows to another device.
The first video shows a few basic shell commands and then catches fire!
And this second video essentially Miracasts the same video you just saw to my TV without any special adapters. The TV is connected to the network and is showing a channel. Windows RS1 (“Anniversary Edition”) can find it on the network (from a Surface Book) and stream directly to it. The TV automatically switches the input over from cable and shows the video; when I stop, it switches back to the cable input. Sweet. 🙂