Since the start of the pandemic I’ve been working at home, going on six months now, and streaming my work self over Zoom many hours a day. My work machine is a 2017 MacBook Pro and my home monitor is an old Apple Cinema Display, so I already had two built-in cameras I could use for streaming. That said, the Cinema Display is 10 years old and its camera quality is about that of a lowly cell phone camera from the same era; to put it bluntly, it’s terrible. The MBP has a decent laptop camera, but it’s not my primary monitor and it’s hard to stare at a secondary monitor meeting after meeting. I also happen to own a Sony α7 II mirrorless camera, and after finally getting tired of presenting myself through a terrible webcam I decided to investigate my options for using it as a webcam. Since then I’ve had numerous people comment on the quality of my camera, and a few people have scheduled calls to learn more about my setup, so I figured I’d document it.
Btw, none of these are affiliate links.
Sony α7 II E-mount Camera with Full Frame Sensor
The α7 II is the world’s first full-frame camera with 5-axis image stabilization and provides camera shake compensation for wide-ranging mountable lenses. Compact and refined for intuitive operation, it offers enhanced Fast Hybrid AF that delivers lightning-fast focusing, super-wide coverage, and exceptionally effective tracking of fast-moving subjects.
While this seems straightforward enough, actually pulling it all together took more effort than I expected. First, Sony cameras are famous for eating batteries, so I needed to find a power source that could last all day. A lot of Googling turned up a number of DIY solutions and an abandoned Sony adapter. I finally found this dummy battery adapter, which has worked great:
F1T-Power ACPW20 AC Power Supply Adapter + Dummy Battery Charger kit Replace NP-FW50 Battery
Next was finding an inexpensive HDMI capture card, because I wasn’t entirely sure I’d stick with this setup. I opted for a $119 off-brand card from digitnow.us, which I’d never heard of, but the card got some good reviews on YouTube and I’ve had no problems; it “just worked”.
BR139 DIGITNOW USB 3.0 Capture Dongle Adapter Card,HDMI To USB 3.0 Live Streaming Device
I also had to purchase a micro-HDMI cable, and there are a million and one of these out there, so I found one on Amazon that got good reviews; again, no problems so far.
CBUS 10ft HDMI to Micro HDMI Cable
I figured if I was going to have high-quality video I’d better upgrade my microphone too, so I bought a Yeti:
Blue – Yeti
Blue offers premium USB and XLR microphones, and audiophile headphones for recording, podcasting, gaming, streaming, YouTube, and more.
I have the camera mounted on my desk using my tripod; while it’s a workable setup it’s not ideal, but I recommend the tripod itself wholeheartedly.
Roadtrip Classic Carbon Fiber | MeFoto
As for software, I mostly use the HDMI capture card directly, but I’ve also installed OBS and played around with it. It spins up my laptop fans and consumes considerable CPU, which hardly seems worth it, and tweaking a number of settings hasn’t really helped. One thing that’s pretty cool, though I’ve never actually used it in a call, is a lightboard I set up using excalidraw as my drawing surface with a Luma Key filter. I’ve also experimented with the iPad app Concepts, using Reflector to mirror the iPad onto my desktop and feed it into OBS.
Open Broadcaster Software®️ | OBS
OBS (Open Broadcaster Software) is free and open source software for video recording and live streaming. Stream to Twitch, YouTube and many other providers or record your own videos with high quality H264 / AAC encoding.
If you’re on a Mac you’ll also need virtual webcam support:
johnboiles/obs-mac-virtualcam
Creates a virtual webcam device from the output of OBS. Especially useful for streaming smooth, composited video into Zoom, Hangouts, Jitsi, etc. Like CatxFish/obs-virtualcam but for macOS.
Lastly, I’m a Wired fan and found this article helpful as well.
How to Turn Your Photography Camera into a Webcam
Months into the pandemic, webcams are still hard to find. But if you’re a shutterbug, you already have a better option.
I’d also recommend installing Sony’s Imaging Edge software so you can use a micro-USB cable and control the camera directly from your desktop, which is far easier than messing with it while it’s mounted behind your monitor. Just be sure to set the camera’s USB Connection setting to PC Remote.
I’ve been exploring the Apollo stack for developing with GraphQL and found the documentation a bit outdated, so I decided to make some notes for myself and start collecting them here. The first thing I wanted to do was experiment with the apollo client codegen for TypeScript, understand how the tool works, and leverage it to create a TypeScript Apollo client. I started with this Star Wars sample Apollo server, which was quick and easy to stand up, so I could focus on the client-side codegen.
$ git clone https://github.com/apollographql/starwars-server.git
...
$ cd starwars-server
$ yarn && yarn start
yarn run v1.15.2
$ nodemon ./server.js --exec babel-node
[nodemon] 1.19.0
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: *.*
[nodemon] starting `babel-node ./server.js`
🚀 Server ready at http://localhost:8080/graphql
🚀 Subscriptions ready at ws://localhost:8080/websocket
Next, I tested a simple GraphQL query to make sure the server was working by browsing here:
http://localhost:8080/graphql
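You can also sanity-check the endpoint from the command line; a minimal query like the following should return something along these lines (hero is a field defined by the sample server’s schema):

$ curl -s -X POST -H "Content-Type: application/json" \
    --data '{"query":"{ hero { name } }"}' \
    http://localhost:8080/graphql
{"data":{"hero":{"name":"R2-D2"}}}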
I installed the Apollo CLI and started experimenting with codegen. Unfortunately, as of this writing the CLI documentation is outdated: it refers to apollo-codegen, and the parameters and configuration appear to have changed. To play with the newer apollo CLI and client-side codegen I created a new “project” folder, the goal being to get some code generated without any other project dependencies or files. So, I created a folder to get started:
$ mkdir starwars-client
$ cd starwars-client
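For reference, the apollo CLI itself is a global npm install (assuming Node and npm are already set up):

$ npm install -g apollo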
Next, I ran the apollo CLI to download the server’s schema, with the --endpoint parameter pointing to the running instance of the starwars-server sample:
➜ starwars-client apollo client:download-schema --endpoint=http://localhost:8080/graphql
⚠️ It looks like there are 0 files associated with this Apollo Project. This may be because you don't have any files yet, or your includes/excludes fields are configured incorrectly, and Apollo can't find your files. For help configuring Apollo projects, see this guide: https://bit.ly/2ByILPj
✔ Loading Apollo Project
✔ Saving schema to schema.json
$ ls
schema.json
$
As you can see, this created a schema.json file containing the schema details from my starwars-server. The next step was generating TypeScript code for a single GraphQL query using the downloaded schema. For good measure I’ll include a few of the issues I ran into along the way, as I didn’t find much on Google related to the various error messages.
➜ starwars-client apollo client:codegen
› Error: Missing required flag:
› --target TARGET Type of code generator to use (swift | typescript | flow | scala)
› See more help with --help
Ok, so I’m missing --target, that’s easy enough to add…
➜ starwars-client apollo client:codegen --target typescript
Error: No schema provider was created, because the project type was unable to be resolved from your config. Please add either a client or service config. For more information, please refer to https://bit.ly/2ByILPj
at Object.schemaProviderFromConfig (~/.nvm/versions/node/v10.15.3/lib/node_modules/apollo/node_modules/apollo-language-server/lib/providers/schema/index.js:29:11)
at new GraphQLProject (~/.nvm/versions/node/v10.15.3/lib/node_modules/apollo/node_modules/apollo-language-server/lib/project/base.js:31:40)
at new GraphQLClientProject (~/.nvm/versions/node/v10.15.3/lib/node_modules/apollo/node_modules/apollo-language-server/lib/project/client.js:33:9)
at Generate.createService (~/.nvm/versions/node/v10.15.3/lib/node_modules/apollo/lib/Command.js:114:28)
at Generate.init (~/.nvm/versions/node/v10.15.3/lib/node_modules/apollo/lib/Command.js:37:14)
➜ starwars-client
Again, unfortunately, the bitly short link provided by the tool points back to the outdated apollo-codegen documentation, which is inaccurate. So I added --localSchemaFile pointing to my newly downloaded schema.json:
➜ starwars-client apollo client:codegen --localSchemaFile=schema.json --target=typescript
⚠️ It looks like there are 0 files associated with this Apollo Project. This may be because you don't have any files yet, or your includes/excludes fields are configured incorrectly, and Apollo can't find your files. For help configuring Apollo projects, see this guide: https://bit.ly/2ByILPj
✔ Loading Apollo Project
✖ Generating query files with 'typescript' target
→ No operations or fragments found to generate code for.
Error: No operations or fragments found to generate code for.
at write (~/.nvm/versions/node/v10.15.3/lib/node_modules/apollo/lib/commands/client/codegen.js:61:39)
at Task.task (~/.nvm/versions/node/v10.15.3/lib/node_modules/apollo/lib/commands/client/codegen.js:86:46)
➜ starwars-client
What this error is actually saying is that the tool expects to find either .graphql or .ts files that define GraphQL “operations” (i.e. queries or mutations) within my project folder, which I hadn’t created yet. It turns out there are a few options: 1) create .ts files with gql constants, or 2) create one or more .graphql files containing named queries. I started with a simple query.graphql file for testing, like this:
query {
  hero(episode: NEWHOPE) {
    name
  }
}
I then ran the command again:
➜ starwars-client apollo client:codegen --localSchemaFile=schema.json --target=typescript
…and this yielded the same error as above, because the CLI defaults to looking in ./src (you can change this using the --includes parameter). So I created the folder, moved the query.graphql file into it, and re-ran the tool:
➜ starwars-client apollo client:codegen --localSchemaFile=schema.json --target=typescript
✔ Loading Apollo Project
✖ Generating query files with 'typescript' target
→ Apollo does not support anonymous operations
GraphQLError: Apollo does not support anonymous operations
Basically, this is telling me I didn’t “name” the query, so it was back to editing the query.graphql file and adding the name “heros”:
query heros {
  hero(episode: NEWHOPE) {
    name
  }
}
Ok, now let’s try that again:
➜ starwars-client apollo client:codegen --localSchemaFile=schema.json --target=typescript
✔ Loading Apollo Project
✔ Generating query files with 'typescript' target - wrote 2 files
Success! I now have a few new folders and files added to my “project”:
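From memory, the generated output lands in a __generated__ folder next to the source queries, with file names following the operation name; the layout looks roughly like this:

➜ starwars-client find src -type f
src/query.graphql
src/__generated__/heros.ts
src/__generated__/globalTypes.ts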
In the example above I used command-line options, although the apollo CLI also supports a config file, apollo.config.js, which can point to the remote schema from my starwars-server instance:
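Mine looked roughly like the following; the client.service block is what the earlier “no schema provider” error was asking for. It’s written as a heredoc here to keep it copy-pasteable, and the name value is arbitrary:

➜ starwars-client cat > apollo.config.js <<'EOF'
module.exports = {
  client: {
    service: {
      name: 'starwars-server',
      url: 'http://localhost:8080/graphql',
    },
    includes: ['./src/**/*.graphql'],
  },
};
EOF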
I currently manage the vSphere/VMware Cloud on AWS SDK team, and I find that looking at the SDKs page on VMware {code} it can be a bit daunting to figure out exactly which SDK you might need. So, to clarify the landscape of vSphere-specific SDKs a bit, I thought I’d flesh out some of the unwritten details.
SDKs & Tools for Calling VMware SOAP APIs
As of this writing (June 2018) the bulk of VMware’s vSphere APIs are SOAP APIs, which can be used from a variety of languages through bindings built from VMware’s WSDL, including the Management SDK (for Java and .NET), the family of “vmomi” libraries (pyvmomi, govmomi, rbvmomi), and last but not least the Perl SDK.
Ok, on to REST APIs…
SDKs for VMware REST APIs
Prior to the release of vSphere 6.5 in 2016, VMware released a set of “vCloud Suite *” SDKs for use with the Tagging and Content Library REST APIs. With the release of 6.5, VMware created a new set of SDKs named “vSphere Automation SDK for *”, where “*” is a language like Java, Python, or Ruby. These new SDKs were released on GitHub and are available here.
VMware Cloud on AWS APIs
At VMworld 2017 VMware announced the release of VMware Cloud on AWS (VMC) and, with it, a new set of APIs for managing this new IaaS environment. As part of this expansion we’ve since added support for these APIs to the vSphere Automation SDKs, including language bindings for the VMC Console APIs as well as the NSX-T Policy APIs.
So there you have it. Hopefully this helps explain some of the links on the SDK page of the VMware {code} website.
Here’s a simple example of calling the vSphere REST API using curl. These commands first authenticate to the API, which creates a vmware-api-session-id cookie stored in cookie-jar.txt, and then make a request to get the list of VMs:
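Something along these lines works (the vCenter hostname and credentials are placeholders, and -k skips certificate validation for a lab setup):

$ curl -k -X POST -c cookie-jar.txt \
    -u 'administrator@vsphere.local:password' \
    https://vcenter.example.com/rest/com/vmware/cis/session
$ curl -k -b cookie-jar.txt https://vcenter.example.com/rest/vcenter/vm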
I wanted to see if I could run an instance of RStudio’s Shiny Server on Cloud 9, and after a bit of finagling to find the right set of steps I have an instance running. Here’s what I did, starting from a workspace using the stock HTML template.
strefethen:~/workspace/ $ sudo sh -c 'echo "deb http://cran.rstudio.com/bin/linux/ubuntu trusty/" >> /etc/apt/sources.list'
strefethen:~/workspace/ $ gpg --keyserver keyserver.ubuntu.com --recv-key E084DAB9
strefethen:~/workspace/ $ gpg -a --export E084DAB9 | sudo apt-key add -
strefethen:~/workspace/ $ sudo apt-get update
strefethen:~/workspace/ $ sudo su - \
-c "R -e \"install.packages('shiny', repos='https://cran.rstudio.com/')\""
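Two steps aren’t captured in the history above but were part of the finagling: installing R itself and installing the Shiny Server package. The .deb version matches the startup log below, though the download URL follows RStudio’s pattern at the time, so treat it as an assumption:

strefethen:~/workspace/ $ sudo apt-get install -y r-base gdebi-core
strefethen:~/workspace/ $ wget https://download3.rstudio.org/ubuntu-12.04/x86_64/shiny-server-1.4.4.801-amd64.deb
strefethen:~/workspace/ $ sudo gdebi shiny-server-1.4.4.801-amd64.deb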
strefethen:/etc/shiny-server $ sudo shiny-server
[2016-09-02 02:25:04.460] [INFO] shiny-server - Shiny Server v1.4.4.801 (Node.js v0.10.46)
[2016-09-02 02:25:04.463] [INFO] shiny-server - Using config file "/etc/shiny-server/shiny-server.conf"
[2016-09-02 02:25:04.535] [WARN] shiny-server - Running as root unnecessarily is a security risk! You could be running more securely as non-root.
[2016-09-02 02:25:04.539] [INFO] shiny-server - Starting listener on 0.0.0.0:8081
I need to look into the “shiny” user to see about fixing the above warning. Then edit /etc/shiny-server/shiny-server.conf and change the port from 3838 -> 8081 so Cloud 9 will serve the content, and start the server:
sudo shiny-server
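For reference, the port change is a one-line edit to the stock config; a sed one-liner like this works, assuming the default listen 3838 directive is still in place:

strefethen:/etc/shiny-server $ sudo sed -i 's/listen 3838/listen 8081/' shiny-server.conf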
Browse to http://<project-name>-<username>.c9users.io:8081/ and you should see the Shiny Server welcome page.
I’m building a React Native application and recently updated to v0.31.0. At first things were working well debugging on the device, benefiting from a feature of the react-native-xcode.sh script: it copies your dev machine’s IP address to a text file called ip.txt, which is used for establishing the connection from your device to your machine, since localhost points to the wrong place.
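From memory, the relevant portion of ./node_modules/react-native/packager/react-native-xcode.sh looked roughly like this in v0.31 (paraphrased, not an exact quote): it detects a Debug build for a physical device, grabs the Mac’s LAN IP, and writes it into the app bundle.

if [[ "$CONFIGURATION" = "Debug" && ! "$PLATFORM_NAME" == *simulator ]]; then
  IP=$(ipconfig getifaddr en0)   # the dev machine's IP on the local network
  echo "$IP" > "$DEST/ip.txt"    # bundled with the app; read at runtime
fi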
In my previous post I wrote about the process that led me to build a dashboard, but first I want to talk a bit about the structure of the data in the Google Sheet where the whole process started. I began by looking to quickly create a few charts to visualize some of our KPIs. To source the data I created a text file containing the SQL statements and used psql to fetch the Postgres data, which I dumped to .CSV files for import into separate “data”-only tabs in Google Sheets.
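Each extract followed roughly this pattern (the table, column, and file names here are made up for illustration; \copy streams the query results straight to a local CSV):

$ psql -h dbhost -U me appdb -c "\copy (SELECT id, campaign, redemptions FROM campaign_stats) TO 'campaigns.csv' WITH CSV HEADER"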
The first tab was the “primary” dataset, which contained a wide (A to AX) set of columns with a blend of content from the various linked “data” tabs, and is where I derived all of the pivot tables, with a primary key in the first column. With this initial set of data I was able to start building charts to help visualize it.
Of course, once you’ve answered one question it leads to follow-on questions, which require more data, leading to more questions. Before long I was querying a dozen tables from Postgres and MSSQL and importing the data into these “data” tabs. For data tabs with a 1:1 relationship based on primary key I would pull the data onto the main sheet with a formula like ='Imported Data'!B4, or in cases where not all keys were present, via a lookup like =IFERROR(VLOOKUP($A:$A,'Data Sheet'!$A:$E,3,FALSE),0), setting the result accordingly when the primary key wasn’t found.
Ultimately, flattening the data made it easy to construct pivot tables for aggregate totals, averages, counts, median values, etc., from which I could build a variety of charts. Here’s a small sample of the kinds of charts built from those pivot tables; note that I’ve clipped/changed some of the legends, knowingly obscuring the underlying meaning of the charts.
I built a variety of pivot tables for the Wanderful Marketing team (sans charts) for easy analysis of Cash Dash campaigns from a variety of angles, such as by retailer, offer type, amount, reward, and launch day of the week, along with a variety of campaign performance metrics I’d calculated within the sheet. Ultimately, the usefulness of this data caught on, and a number of teams were not only reviewing the data but asking for additional analysis and updates.
While I was able to automate some portions of updating this sheet and its associated tabs, Google Sheets charts and pivot tables don’t automatically expand as the size of your data grows, which made it a laborious task to “re-scope” them as more data was added, not to mention that I knew the 2M-cell limit was looming in the distance.
In a follow-on post I’ll talk about how I began the shift to automating this using R and a Shiny Dashboard running on an OS X Mac mini.
Having worked on building mobile apps for the last several years, I thought I would publish a list of some of the things I’ve learned here in the mobile trenches. Without further ado and in no particular order… Btw, I welcome your feedback/additions. Continue reading Lessons Learned in a Mobile Startup →