All posts by Steve

Using a Sony a7ii DSLR Camera for video conferencing

Since the start of the pandemic I’ve been working at home for going on 6 months now, streaming my work self over Zoom many hours a day. My work machine is a 2017 MacBook Pro and my home monitor is an old Apple Cinema Display, so I already have two built-in cameras I can use for streaming. That said, the Cinema Display is 10 years old and the camera quality is about that of a lowly cell phone camera from the same era; to put it bluntly, it’s terrible. The MBP has a decent laptop camera, however it’s not my primary monitor and it’s hard to stare at a secondary monitor meeting after meeting. I also happen to own a Sony a7ii DSLR and, after finally getting tired of presenting myself through a terrible webcam, I decided to investigate my options for using it as a webcam. Since then I’ve had numerous people comment on the quality of my camera and a few people schedule calls to learn more about my setup, so I figured I’d document it.

Btw, none of these are affiliate links.

While this seems straightforward enough, actually pulling it all together took more effort than I expected. First, Sony DSLRs are famous for eating batteries, so I needed to find a power source that could last all day. I did a lot of Googling and found a number of DIY solutions and an abandoned Sony adapter. I finally found this dummy battery adapter, which has worked great:

Next was finding an inexpensive HDMI capture card because I wasn’t entirely sure I’d stick with this setup. I opted for a $119 off-brand card from digitnow.us, which I’d never heard of, but the card got some good reviews on YouTube and I’ve had no problems — “it just worked”.

I also had to purchase a micro-HDMI cable, and there are a million and one of these out there, so I found one on Amazon that got good reviews and, again, no problems so far.

I figured if I was going to have high-quality video I’d better upgrade my microphone too, so I bought a Yeti:

I have the camera mounted on my desk using my tripod; while it’s a workable setup it’s not ideal, but I recommend this tripod wholeheartedly.

As for software, I mostly use the HDMI capture card directly, but I’ve also installed OBS and played around with it. It does drive my laptop fans and consumes considerable CPU, which hardly seems worth it, and I’ve tried tweaking a number of settings to fix the issue but nothing really seems to have helped. One thing that’s pretty cool, though I’ve never actually used it in a call, is a lightboard that I set up in OBS using excalidraw as my drawing surface and a Luma Key filter. I’ve also experimented with the iPad app Concepts, using Reflector to mirror the iPad onto my desktop and feed it into OBS.

If you’re on a Mac you’ll also need virtual webcam support:

Lastly, I’m a Wired fan and found this article helpful as well.

I’d also recommend installing Sony’s Imaging Edge software so you can control the camera directly from your desktop over a mini-USB cable, which is far easier than messing with it when it’s mounted behind your monitor. Just be sure to set the camera’s USB Connection setting to PC Remote.

Working with Apollo CLI

I’ve been exploring the Apollo stack for developing with GraphQL and found the documentation a bit outdated, so I decided to make some notes for myself and start collecting them here. The first thing I wanted to do was experiment with the Apollo client codegen for TypeScript, understand how the tool works, and leverage it to create a TypeScript Apollo client. I started with this Starwars sample Apollo server, which was quick and easy to stand up, so I could focus on the client-side codegen.

$ git clone https://github.com/apollographql/starwars-server.git
...
$ cd starwars-server
$ yarn && yarn start
yarn run v1.15.2
$ nodemon ./server.js --exec babel-node
[nodemon] 1.19.0
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: *.*
[nodemon] starting `babel-node ./server.js`
🚀 Server ready at http://localhost:8080/graphql
🚀 Subscriptions ready at ws://localhost:8080/websocket

Next, I tested a simple GraphQL query to make sure the server was working by browsing here:

http://localhost:8080/graphql
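That URL opens the GraphiQL explorer; a query along these lines (using the sample server’s hero field, the same one I query later) is enough to confirm the server is wired up:

query TestHero {
  hero(episode: NEWHOPE) {
    name
  }
}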

I installed the Apollo CLI and started experimenting with codegen. Unfortunately, as of this writing the CLI documentation is outdated: it refers to apollo-codegen, and the parameters and configuration appear to have changed. To play with the newer apollo CLI and client-side codegen I just wanted to get some code generated without any other project dependencies or files, so I created a new “project” folder to get started:

$ mkdir starwars-client
$ cd starwars-client

Next, I ran the apollo CLI to download the server’s schema, with the --endpoint parameter pointing to the running instance of the starwars-server sample:

➜  starwars-client apollo client:download-schema --endpoint=http://localhost:8080/graphql
⚠️  It looks like there are 0 files associated with this Apollo Project. This may be because you don't have any files yet, or your includes/excludes fields are configured incorrectly, and Apollo can't find your files. For help configuring Apollo projects, see this guide: https://bit.ly/2ByILPj
  ✔ Loading Apollo Project
  ✔ Saving schema to schema.json
$ ls
schema.json
$

As you can see, this created a schema.json file containing details from my starwars-server. The next step is generating TypeScript code for a single GraphQL query using the downloaded schema. For good measure I’ll include a few of the issues I ran into along the way, as I didn’t find much on Google related to the various error messages.

➜  starwars-client apollo client:codegen
 ›   Error: Missing required flag:
 ›     --target TARGET  Type of code generator to use (swift | typescript | flow | scala)
 ›   See more help with --help

Ok, so I’m missing --target; that’s easy enough to add…

➜  starwars-client apollo client:codegen --target typescript
Error: No schema provider was created, because the project type was unable to be resolved from your config. Please add either a client or service config. For more information, please refer to https://bit.ly/2ByILPj
    at Object.schemaProviderFromConfig (~/.nvm/versions/node/v10.15.3/lib/node_modules/apollo/node_modules/apollo-language-server/lib/providers/schema/index.js:29:11)
    at new GraphQLProject (~/.nvm/versions/node/v10.15.3/lib/node_modules/apollo/node_modules/apollo-language-server/lib/project/base.js:31:40)
    at new GraphQLClientProject (~/.nvm/versions/node/v10.15.3/lib/node_modules/apollo/node_modules/apollo-language-server/lib/project/client.js:33:9)
    at Generate.createService (~/.nvm/versions/node/v10.15.3/lib/node_modules/apollo/lib/Command.js:114:28)
    at Generate.init (~/.nvm/versions/node/v10.15.3/lib/node_modules/apollo/lib/Command.js:37:14)
➜  starwars-client

Again, unfortunately the bitly short link provided by the tool points back to the outdated, inaccurate apollo-codegen documentation. So I added --localSchemaFile pointing to my newly downloaded schema.json:

➜  starwars-client apollo client:codegen --localSchemaFile=schema.json --target=typescript
⚠️  It looks like there are 0 files associated with this Apollo Project. This may be because you don't have any files yet, or your includes/excludes fields are configured incorrectly, and Apollo can't find your files. For help configuring Apollo projects, see this guide: https://bit.ly/2ByILPj
  ✔ Loading Apollo Project
  ✖ Generating query files with 'typescript' target
    → No operations or fragments found to generate code for.
Error: No operations or fragments found to generate code for.
    at write (~/.nvm/versions/node/v10.15.3/lib/node_modules/apollo/lib/commands/client/codegen.js:61:39)
    at Task.task (~/.nvm/versions/node/v10.15.3/lib/node_modules/apollo/lib/commands/client/codegen.js:86:46)
➜  starwars-client

What this error is actually saying is that the tool expects to find either .graphql or .ts files with GraphQL “operations” (queries or mutations) defined within my project folder, which I haven’t created yet. It turns out there are a couple of options: 1) create .ts files with gql constants, or 2) create one or more .graphql files containing named queries. I started with a simple query.graphql file for testing like this:

query {
  hero(episode: NEWHOPE) {
    name
  }
}

I then ran the command again:

➜  starwars-client apollo client:codegen --localSchemaFile=schema.json --target=typescript

…and this yielded the same error as above because the CLI defaults to looking in ./src, although you can change this using the --includes parameter. So I created the folder, moved the query.graphql file and re-ran the tool:

➜  starwars-client apollo client:codegen --localSchemaFile=schema.json --target=typescript
  ✔ Loading Apollo Project
  ✖ Generating query files with 'typescript' target
    → Apollo does not support anonymous operations
GraphQLError: Apollo does not support anonymous operations

Basically, this is telling me I didn’t “name” the query, so it was back to editing the query.graphql file and adding the name “heros”:

query heros {
  hero(episode: NEWHOPE) {
    name
  }
}

Ok, now let’s try that again:

➜  starwars-client apollo client:codegen --localSchemaFile=schema.json --target=typescript
  ✔ Loading Apollo Project
  ✔ Generating query files with 'typescript' target - wrote 2 files

Success! I now have a few new folders and files added to my “project”:

➜  starwars-client tree
.
├── schema.json
├── __generated__
│   └── globalTypes.ts
└── src
    ├── query.graphql
    └── __generated__
        └── heros.ts

Btw, here’s an example .ts file with a gql constant declared that would also work to generate code:

import gql from "graphql-tag";

const heros = gql`
query heros {
  hero(episode: NEWHOPE) {
    name
  }
}`;

In the above examples I used command-line options, although the apollo CLI also supports a config file, apollo.config.js, which looks like the following and points to the remote schema from my starwars-server instance:

module.exports = {
    client: {
        service: {
            url: "http://localhost:8080/graphql"
        }
    }
}
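The client config can also point at the local schema file and tell the CLI where to find operations. Something along these lines should be roughly equivalent to the --localSchemaFile and --includes flags used above (the service name and includes glob here are my own choices, not taken from the CLI output):

module.exports = {
    client: {
        // assumes schema.json downloaded earlier and .graphql operations under ./src
        service: {
            name: "starwars-server",
            localSchemaFile: "schema.json"
        },
        includes: ["src/**/*.graphql"]
    }
}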

Using the config file, you can change the command line as follows for pulling the schema:

➜  starwars-client apollo client:download-schema --config=apollo.config.js
  ✔ Loading Apollo Project
  ✔ Saving schema to schema.json

Next, I’ll start making use of the generated code and save that for another post.
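As a teaser, here’s a rough sketch of how the generated types might be consumed from TypeScript (using apollo-boost and graphql-tag; the import path and the generated heros type assume the files produced above, and the nullability handling is my guess at the generated shape):

import ApolloClient from "apollo-boost";
import gql from "graphql-tag";
// generated by `apollo client:codegen` from src/query.graphql
import { heros } from "./src/__generated__/heros";

const client = new ApolloClient({ uri: "http://localhost:8080/graphql" });

const HEROS_QUERY = gql`
  query heros {
    hero(episode: NEWHOPE) {
      name
    }
  }
`;

client
  .query<heros>({ query: HEROS_QUERY })
  .then(result => console.log(result.data.hero ? result.data.hero.name : "no hero"));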

VMware vSphere SDKs explained

I currently manage the vSphere/VMware Cloud on AWS SDK team, and I find that looking at the SDKs page on VMware {code} it can be a bit daunting to figure out exactly which SDK you might need. So, to clarify the landscape for vSphere-specific SDKs a bit, I thought I’d flesh out some of the unwritten details.

SDKs & Tools for Calling VMware SOAP APIs

As of this writing (June 2018), the bulk of VMware’s vSphere APIs are SOAP APIs, which can be used from a variety of languages through SDKs built from VMware’s WSDL, including the Management SDK (for Java and .NET), the family of “vmomi” tools (pyvmomi, govmomi, rbvmomi), and last but not least the Perl SDK.

Ok, on to REST APIs…

SDKs for VMware REST APIs

Prior to the release of vSphere 6.5 in 2016, VMware released a set of “vCloud Suite *” SDKs for use with the Tagging and Content Library REST APIs. With the release of 6.5, VMware created a new set of SDKs named “vSphere Automation SDK for *” where “*” is a language like Java, Python or Ruby. These new SDKs were released on GitHub and are available here.

VMware Cloud on AWS APIs

At VMworld 2017 VMware announced the release of VMware Cloud on AWS (VMC) and, with it, a new set of APIs for managing this new IaaS environment. As part of this expansion we’ve since added support for these APIs to the vSphere Automation SDKs, including language bindings for the VMC Console APIs as well as the NSX-T Policy APIs.

So there you have it. Hopefully this helps explain some of the links on the SDK page of the VMware {code} website.

Please post a comment with any questions.

VMware Automation SDK for Python API Reference Documentation

Accessing VMware vcenter REST API Authentication from curl

Here’s a simple example of calling the vSphere REST API using curl. These commands first authenticate to the API, which creates a vmware-api-session-id cookie stored in cookie-jar.txt, then make a request to get a list of VMs:

curl -k -i -u administrator@vsphere.local:password -X POST -c cookie-jar.txt https://vcenter/rest/com/vmware/cis/session
curl -k -i -b cookie-jar.txt https://vcenter/rest/vcenter/vm

NOTE: Use with caution as your credentials will likely be caught in your command line history!
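If you’d rather script this than use curl, here’s a minimal TypeScript sketch of the same flow (assuming Node 18+ for the built-in fetch, a vCenter certificate your environment already trusts, and the same placeholder host and credentials as above):

// Minimal sketch: authenticate, then list VMs via the vSphere REST API.
const vcenter = "https://vcenter"; // placeholder host
const auth = Buffer.from("administrator@vsphere.local:password").toString("base64");

async function listVms(): Promise<void> {
  // POST /rest/com/vmware/cis/session returns { value: "<session-id>" }
  const login = await fetch(`${vcenter}/rest/com/vmware/cis/session`, {
    method: "POST",
    headers: { Authorization: `Basic ${auth}` },
  });
  const sessionId = (await login.json()).value;

  // Subsequent calls pass the session id in the vmware-api-session-id header
  const vms = await fetch(`${vcenter}/rest/vcenter/vm`, {
    headers: { "vmware-api-session-id": sessionId },
  });
  console.log(await vms.json());
}

listVms().catch(console.error);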

Here’s a related post on vSphere SDKs.

Check out the new VMware developer.vmware.com portal!

Highway 17 Santa Cruz Mountains Traffic Commuter Resources

Resources for information on Highway 17 Traffic over the Santa Cruz mountains to/from Los Gatos.

Installing R and Shiny on Cloud 9

I wanted to see if I could run an instance of RStudio’s Shiny Server on Cloud 9, and after a bit of finagling to find the right set of steps I have an instance running. Here’s what I did, starting from a workspace using the stock HTML template.

strefethen:~/workspace/ $ sudo sh -c 'echo "deb http://cran.rstudio.com/bin/linux/ubuntu trusty/" >> /etc/apt/sources.list'
strefethen:~/workspace/ $ gpg --keyserver keyserver.ubuntu.com --recv-key E084DAB9
strefethen:~/workspace/ $ sudo apt-get update
strefethen:~/workspace/ $ sudo su - \
-c "R -e \"install.packages('shiny', repos='https://cran.rstudio.com/')\""
strefethen:/etc/shiny-server $ sudo shiny-server
[2016-09-02 02:25:04.460] [INFO] shiny-server - Shiny Server v1.4.4.801 (Node.js v0.10.46)
[2016-09-02 02:25:04.463] [INFO] shiny-server - Using config file "/etc/shiny-server/shiny-server.conf"
[2016-09-02 02:25:04.535] [WARN] shiny-server - Running as root unnecessarily is a security risk! You could be running more securely as non-root.
[2016-09-02 02:25:04.539] [INFO] shiny-server - Starting listener on 0.0.0.0:8081

I need to look into the user “shiny” to see about fixing the above warning. Then edit /etc/shiny-server/shiny-server.conf and change the port from 3838 to 8081 so Cloud 9 will serve the content, and start the server:

sudo shiny-server
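For reference, the listen directive is the only change needed; the stock /etc/shiny-server/shiny-server.conf should end up looking roughly like this (layout from memory, paths may differ on your image):

run_as shiny;

server {
  # Cloud 9 exposes apps on ports 8080-8082, so swap the default 3838 for 8081
  listen 8081;

  location / {
    site_dir /srv/shiny-server;
    log_dir /var/log/shiny-server;
    directory_index on;
  }
}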

Browse to http://<project-name>-<username>.c9users.io:8081/ and you should see:

Default Shiny app running on Cloud 9

React Native bundle loading failing on a physical device

I’m building a React Native application and recently updated to v0.31.0. At first things were working well debugging on the device, thanks to a feature of the react-native-xcode.sh script: it copies your dev machine’s IP address to a text file called ip.txt, which the device uses to establish the connection back to your machine since localhost points to the wrong place. Here’s the line of code in ./node_modules/react-native/packager/react-native-xcode.sh:

echo "$IP.xip.io" &gt; "$DEST/ip.txt"

Continue reading React Native bundle loading failing on a physical device

Using Google Sheets, Pivot Tables and Charts as a Startup Dashboard – Part II

In my previous post I wrote about the process that led me to build a dashboard, but first I want to talk a bit about the structure of the data in the Google Sheet where the whole process started. I began by looking to quickly create a few charts to visualize some of our KPIs. To source the data I created a text file containing the SQL statements and used psql to fetch the Postgres data, which I dumped into .csv files for import into separate “data”-only tabs in Google Sheets.

psql -h pgserver -d mydb -U myuser -w -t -A -F $'\t' -f ~/Campains.sql > campaign.csv

The first tab was the “primary” dataset: a wide (A to AX) set of columns blending content from the various linked “data” tabs, with a primary key in the first column, and it’s where I derived all of the pivot tables. With this initial set of data I was able to start building charts to help visualize it.

Of course, once you’ve answered one question it leads to follow-on questions, which require more data, leading to more questions. Before long I was querying a dozen tables from Postgres and MSSQL and importing the data into these “data” tabs. For data tabs with a 1:1 relationship based on the primary key I would pull the data onto the main sheet with a formula like ='Imported Data'!B4, or, in cases where not all keys were present, via a lookup like =IFERROR(VLOOKUP($A:$A,'Data Sheet'!$A:$E,3,FALSE),0), setting the result accordingly when the primary key wasn’t found.

Ultimately, flattening the data made it easy to construct pivot tables for aggregate totals, averages, counts, median values, etc., from which I could build a variety of charts.

Here’s a small sample of the kinds of charts built from pivot tables. Yes, I’ve clipped/changed some of the legends knowingly obscuring the underlying meaning of the chart.

Monthly Totals

I built a variety of pivot tables (sans charts) for the Wanderful Marketing team for easy analysis of Cash Dash campaigns from a variety of angles, such as by retailer, offer type, amount, reward, launch day of the week, and a variety of campaign performance metrics I’d calculated within the sheet. Ultimately, the usefulness of this data caught on, and a number of teams were not only reviewing it but asking for additional analysis and updates.

While I was able to automate some portions of updating this sheet and its associated tabs, Google Sheets charts and pivot tables don’t automatically expand as your data grows, which made it a laborious task to “re-scope” them as more data was added, not to mention that I knew the 2M cell limit was looming in the distance.

In a follow-on post I’ll talk about how I began the shift to automating this using R and a Shiny Dashboard running on an OSX Mac mini.