Posts
Over the past 9 months, I’ve been rewriting my models from the ground up and open sourcing them on GitHub. The code is now fully public and available for anyone to use and modify.
The majority of the code is under the BSD-3-Clause license, which matches other open source projects such as PyTorch. A few pieces include modules borrowed from other projects.
This is a follow-up to 3D Dynamic Objects and is part of a series where I try to train models to perform common self driving tasks from scratch.
This is a follow-up to 3D Semantic Segmentation and is part of a series where I try to train models to perform common self driving tasks from scratch.
I decided to switch areas of focus for this new model. Previously I had been working entirely with dense models, which output dense representations of the world such as voxel occupancy grids and BEV semantic maps for lane lines and drivable space.
This is a follow-up to Voxel from Multicam and is part of a series where I try to train models to perform common self driving tasks from scratch.
I’ve previously put together occupancy models for self driving, but that’s only one specific perception task.
Another common driving task is semantic segmentation. Semantic segmentation takes in an image and predicts a specific class for every pixel. This can be used to tell walls apart from cars, or to classify different types of lane lines and curbs on a road.
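Concretely, the model’s raw output is a score per class for every pixel, and the predicted class is just the argmax. A minimal sketch of that final step (the scores below are made up, not from a real model):

```go
package main

import "fmt"

// classMap picks the highest-scoring class for every pixel, given
// per-pixel class scores laid out as H x W x C.
func classMap(scores [][][]float64) [][]int {
	out := make([][]int, len(scores))
	for y, row := range scores {
		out[y] = make([]int, len(row))
		for x, classes := range row {
			best := 0
			for c, s := range classes {
				if s > classes[best] {
					best = c
				}
			}
			out[y][x] = best // e.g. 0 = road, 1 = car, 2 = lane line
		}
	}
	return out
}

func main() {
	// A single-row, two-pixel "image" with three class scores per pixel.
	scores := [][][]float64{{{0.1, 0.7, 0.2}, {0.8, 0.1, 0.1}}}
	fmt.Println(classMap(scores)) // [[1 0]]
}
```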
This is a follow-up to Monocular Depth Improvements and is part of a series where I try to train models to perform common self driving tasks from scratch.
Background
I spent a couple of months optimizing single camera (monocular) depth models before realizing that maybe there’s a better way. One of the biggest improvements I made to the monocular models was adding a 3D geometric constraint to enforce that the model didn’t predict depths below the ground.
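The constraint itself falls out of simple geometry: with a flat ground plane and a camera mounted at height h, a ray angled downward hits the ground at depth h / -ray_y, and anything past that point is underground. Here’s a sketch of that bound; the flat-ground assumption, the clamping, and the numbers are mine for illustration, not the post’s actual loss:

```go
package main

import (
	"fmt"
	"math"
)

// maxDepthToGround returns the distance at which a unit ray from the
// camera hits a flat ground plane, given a camera mounted height h
// above the ground (y axis pointing up). Rays pointing level or
// upward never hit the ground.
func maxDepthToGround(h, rayY float64) float64 {
	if rayY >= 0 {
		return math.Inf(1)
	}
	return h / -rayY
}

// clampDepth caps a predicted depth so the resulting 3D point can't
// end up below the ground plane.
func clampDepth(pred, h, rayY float64) float64 {
	return math.Min(pred, maxDepthToGround(h, rayY))
}

func main() {
	h := 1.5 // hypothetical camera height in meters
	fmt.Println(clampDepth(80, h, -0.05)) // downward ray: capped at 30m
	fmt.Println(clampDepth(80, h, 0.1))   // upward ray: stays 80m
}
```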
This is a follow-up to DIY Self Driving.
In the past few months I’ve been iterating on my previous work on creating self driving models. The main goals were initially:
- train depth models for each camera
- generate joint point clouds from the multiple cameras
- use the fused outputs to create a high quality reconstruction that I can use to label things like lane lines

This post lists all the various problems I ran into and some of the mitigations I applied for those issues.
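For the point cloud step, the standard pinhole unprojection maps a pixel (u, v) with depth d to the camera-frame point ((u - cx) / fx · d, (v - cy) / fy · d, d). A minimal sketch with made-up intrinsics:

```go
package main

import "fmt"

// intrinsics holds the pinhole camera parameters: focal lengths and
// principal point. The values used below are invented for illustration.
type intrinsics struct {
	fx, fy, cx, cy float64
}

// unproject maps a pixel (u, v) with predicted depth d (meters) to a
// 3D point in the camera frame using the pinhole model.
func unproject(k intrinsics, u, v, d float64) [3]float64 {
	return [3]float64{
		(u - k.cx) / k.fx * d,
		(v - k.cy) / k.fy * d,
		d,
	}
}

func main() {
	k := intrinsics{fx: 900, fy: 900, cx: 640, cy: 360}
	// The center pixel at 10m lands straight ahead on the optical axis.
	fmt.Println(unproject(k, 640, 360, 10)) // [0 0 10]
}
```

Each camera’s cloud would then be transformed into a shared vehicle frame using its extrinsics before fusing.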
This work was done in collaboration with green, Sherman and Sid.
During the holidays I decided to take some time and try to make my own self driving machine learning models as part of a pipe dream to have an open source self driving car. I hacked this together over the course of about 2 weeks in between holiday activities.
Disclaimer 1: I’m a software engineer on PyTorch but this work was done on my own time and not part of my normal duties.
This is a follow-up to Hacking my Tesla Model 3 - Internal API.
As part of reverse engineering the Tesla Model 3 internals, I’ve been running a subset of the CID car services to see how they work.
The car computers use Intel Atom based processors, so it’s easy to set up a chroot to launch the services.
I’ve written two helper scripts to set up the car environment:
This is a follow-up to Hacking my Tesla Model 3 - Security Overview.
This is a technical description of all the internal services I’ve found and notes about how they work.
All of the services described here are normally inaccessible due to seceth and firewall rules.
Hosts:
192.168.90.100 cid ice
192.168.90.101 ic
192.168.90.102 gw
192.168.90.103 ap ape
192.168.90.104 lb
192.168.90.105 ap-b ape-b
192.168.90.30 tuner
192.168.90.60 modem

Tuner isn’t present on newer Model 3s as the AM/FM radio has been removed.
See the follow-up at Hacking my Tesla Model 3 - Internal API.
I recently got a Tesla Model 3 and since I’m a huge nerd I’ve been spending a lot of time poking at the systems and trying to reverse engineer/figure out how to root my car.
I work on Machine Learning infrastructure so I’d love to be able to take a deep look at how autopilot/FSD works under the hood and what it can actually do beyond what limited information the UI shows.
Edit 2018-09-20T15:42-07:00: Dropbike’s response to these issues
Edit 2018-09-19T19:38-07:00: Updated support comments to more accurately reflect their response.
Note: These issues were responsibly disclosed and have since been fixed. This is my understanding of the issues to the best of my knowledge.
To give you a little bit of background, Dropbike is a new bike sharing service that just launched at the University of British Columbia, one of their first locations. They’re only about a year old and based out of Toronto. The service is pretty simple: they have a bunch of bikes with a cell connection and Bluetooth Low Energy locks spread out all over campus. You can use their app to find nearby bikes and unlock them. Overall, it seems like a neat, convenient service and I was super excited to have them on campus.
As part of Luk.ai we need to be able to run TensorFlow within a secure environment, since a running TensorFlow model can do pretty much anything it wants to the host system.
For ease of deployment, we’d also like to be able to use Docker since it provides nice sandboxing support and the ability to limit resources used by the container. We’d also like the container to not be able to do anything other than run models.
I’ve been doing a bunch of work during my internship with Machine Learning models, so I figured I’d take a crack at applying them to some of my personal projects. Just for fun, I wanted to see what would happen if I tried to train a model on the registration, check-in and submission data for nwHacks.
I decided to use Hector, a suite of algorithms written entirely in Go, since that’s what most of the nwHacks tooling is written in.
A couple of my friends got food poisoning eating at places in the village in the past month or so. I decided to do some digging and find out which places have the best food safety records. To my horror, pretty much every place on campus has food safety violations.
With nwHacks 2017 coming up this weekend, I figured it would be a good time to do a writeup of the tech stack and all the different components that are used to make the hackathon a success. This covers each component of the stack and the technologies behind it.
This contains a list of all the probabilities you might be interested in when playing the Shadow Hunters board game.
Yesterday, I decided to take a shot at rewriting the University of British Columbia’s Technical Career Fair (UBC TCF) website in Hugo. The TCF is one of the many events that the UBC Computer Science Student Society puts on every year and there’s been a day-of website for a number of years to allow companies to find their booths and students to find out about the companies.
The old site was written in Python using Django and had a small admin interface.
I’ve been doing a bunch of work on this site. This is a test page for all the different visual elements.
Block Quote Example
Hey there! This is some example text that I needed to add to correctly get this line to wrap. How do you guys feel about the color blue?
Syntax Highlighting Example
Here’s an example “Hello World!” Go program. I tend to prefer log over fmt when it comes to printing things to the screen.
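The program itself isn’t shown in this excerpt, but a minimal version using log would look something like:

```go
package main

import "log"

func main() {
	// log writes to stderr with a timestamp, which is why I prefer it
	// over fmt for this sort of thing.
	log.Println("Hello World!")
}
```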
This is my weekly calendar. I’m typically available any time Monday to Friday 9:00 to 17:00 unless listed as busy below.
This is the new site. Pretty bare bones right now.
There’s a pretty awesome lightning storm going on outside.
I’ve done a complete rewrite of the site using Polymer. It was pretty quick to write and has some neat features such as this embedded blog backed by Tumblr. You can see the old blog at http://blog.fn.lc.
It also does live fetches of my most popular GitHub projects.
Both of these are implemented by directly accessing the respective APIs using iron-ajax.
https://fn.lc/ficrecommend/
I launched this today as it’s become fairly polished under my own personal use.
The ranking algorithm is pretty simple but actually works fairly well.
Here’s how it works:
1. Get a story in the form of a URL.
2. Look up all the users who have liked/favorited that story.
3. Count all the favorited stories of those users.
4. Display the top 50 stories by number of favorites.

Source Code: https://github.com/d4l3k/ficrecommend
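A minimal sketch of that ranking in Go (the types and in-memory data here are hypothetical, not the actual ficrecommend code):

```go
package main

import (
	"fmt"
	"sort"
)

// recommend ranks stories by how often they were favorited by users
// who also favorited the seed story. favoritesOf maps each user to
// the stories they favorited.
func recommend(seed string, favoritesOf map[string][]string) []string {
	counts := map[string]int{}
	for _, stories := range favoritesOf {
		// Only consider users who favorited the seed story.
		liked := false
		for _, s := range stories {
			if s == seed {
				liked = true
				break
			}
		}
		if !liked {
			continue
		}
		// Count every other story those users favorited.
		for _, s := range stories {
			if s != seed {
				counts[s]++
			}
		}
	}
	// Rank by favorite count, descending, and keep the top 50.
	ranked := make([]string, 0, len(counts))
	for s := range counts {
		ranked = append(ranked, s)
	}
	sort.Slice(ranked, func(i, j int) bool {
		return counts[ranked[i]] > counts[ranked[j]]
	})
	if len(ranked) > 50 {
		ranked = ranked[:50]
	}
	return ranked
}

func main() {
	favs := map[string][]string{
		"alice": {"story-a", "story-b"},
		"bob":   {"story-a", "story-b", "story-c"},
		"carol": {"story-c"},
	}
	fmt.Println(recommend("story-a", favs)) // [story-b story-c]
}
```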
A library for doing diffs of arbitrary Golang structs.
https://github.com/d4l3k/messagediff
I put this together because I wanted an easy way to display diffs during testing. It’s fairly similar to an internal library I used during my internship this summer.
It’s pretty basic but I’m planning on adding LCS support if I ever get around to it. It does have support for diffing non-exported fields using go-spew’s unsafe reflect modifications.
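If I’m remembering the API correctly, usage looks roughly like this; treat the import path and exact signature as approximate:

```go
package main

import (
	"fmt"

	"github.com/d4l3k/messagediff"
)

type someStruct struct {
	A, B int
	C    []int
}

func main() {
	a := someStruct{1, 2, []int{1}}
	b := someStruct{1, 3, []int{1, 2}}
	// PrettyDiff returns a human-readable diff of the two values and
	// whether they were equal.
	diff, equal := messagediff.PrettyDiff(a, b)
	fmt.Println(equal)
	fmt.Print(diff)
}
```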
I just implemented i18n-js support in WebSync. This came around after realizing my localization support for the JavaScript front end was lacking.
The i18n-js library is super useful and integrates directly with Sprockets and I18n making it as easy as doing:
```js
//= require i18n
//= require i18n/translations

// Some translation
I18n.t('translate-me')
```

However, it’s designed for use with Rails and thus doesn’t play nicely with Sinatra and sinatra-asset-pipeline. While it loaded just fine, Sprockets couldn’t find the i18n JavaScript files.
I’ve been working on implementing search for documents. I’m not sure if I’m ever going to implement search for body content, but I thought I should probably implement it for titles & users.
It turns out that PostgreSQL has pretty nice full text search support with lexemes. I’ve been following this article pretty closely:
http://blog.lostpropertyhq.com/postgres-full-text-search-is-good-enough/
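WebSync itself is Ruby, but to make the lexeme approach concrete, here’s roughly the query shape sketched in Go; the documents table and connection string are hypothetical:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
)

// searchTitles finds documents whose titles match the query using
// PostgreSQL's lexeme-based full text search.
func searchTitles(db *sql.DB, query string) ([]string, error) {
	rows, err := db.Query(
		`SELECT title FROM documents
		 WHERE to_tsvector('english', title) @@ plainto_tsquery('english', $1)`,
		query)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var titles []string
	for rows.Next() {
		var t string
		if err := rows.Scan(&t); err != nil {
			return nil, err
		}
		titles = append(titles, t)
	}
	return titles, rows.Err()
}

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/websync?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(searchTitles(db, "banana"))
}
```

Since plainto_tsquery stems words into lexemes, a search for “banana” still matches a title containing “Bananas”.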
The only issue I’ve encountered is that it doesn’t do direct text matching. For example, if you have a title ‘Bananas are tasty!
I did a bunch of work on WebSyn.ca this weekend. Here’s a list of some things I did:
- Made charts persistent
- Made a nicer interface for inserting them
- Added a bunch of options such as type, titles (main, x, y), legend, smooth lines
- Chart types: Line, Bar, Radar, Polar Area, Pie, Doughnut
- Updated require.js
- Made tables.js and charts.js use the require.js copy of WebSync
- Shifted a huge number of dependencies to load from bower (thus making updating easier)
I finally got some free time since my midterms are over, and I decided to work on WebSyn.ca. I fixed a couple of bugs, including file export.
I also decided to update the overall visual style and update the format of the file list page. The previous style was pretty terrible and just an HTML table. The newer version is pretty much the same thing but looks a bit more like an actual file manager.
WebSyn.ca was down for ten minutes while migrating to a new server. Everything should be working now.
Here’s some stats from the sketchy URL shortener. I’m really surprised it got so much traffic.
http://fn.lc
I’m not terribly happy with it. It seems a bit bland and confusing. I’ll probably add some explanatory text to the top.
Content aside, the setup is kind of neat. It uses erb, scss, and vim to render the code into html.
"vim -f -n code.js +TOhtml +wq +q"
It’s interesting that you can use vim to modify/export files programmatically.
I also set up mina for deployment. It makes pushing a new version of the site as easy as running "mina deploy".
I’ve implemented basic =eqn() support in WebSync. Right now it just executes some JavaScript if the text in the cell starts with =. I’ve also added one helper function, invoked in the format c("A1"), that returns the value of a cell.
We’ll see how this goes. I’m extremely hesitant to allow people to run untrusted JavaScript code on people’s browsers. I might have to add in a “This document uses untrusted javascript, are you willing to accept any consequences?” prompt.
I just added easy custom CSS on WebSync documents. It might not be for the best… However, it works quite well and, like everything else, you can edit the CSS in one window and preview the changes in the other. :D
A future update might add some sort of local CSS so you can only customize things under the .content_well div. Right now you can style anything on the page.
Ok, there was a need for OPENSSL_cleanse() instead of bzero() to prevent supposedly smart compilers from optimizing memory cleanups away. Understood.
Ok, in the case of a hypothetically super smart compiler, OPENSSL_cleanse() had to be convoluted enough for the compiler not to recognize that this was actually bzero() in disguise. Understood.
But then why there have been optimized assembler versions of OPENSSL_cleanse() is beyond me. Did someone not trust the C obfuscation?
Hi there!