<h1 id="how-objective-c-made-me-a-versatile-software-engineer">How Objective-C Made Me a Versatile Software Engineer</h1>
<p><em>2023-04-20 · Ramón Huidobro</em></p>
<p><em>Special thanks to the incomparable <a href="https://twitter.com/SylwiaVargas">Sylwia
Vargas</a> for helping me structure and focus
this post better, as well as to <a href="https://chaos.social/@uliwitness">Uli</a> for making kind corrections.</em></p>
<p>Late last week, I posted on social media about going back to an old macOS
Objective-C codebase I’ve worked on for over a decade, prompting the question
“what is Objective-C used for?”</p>
<p>After nerding out in a thread about it, I realised there was so much to
Objective-C, the time it’s from, and why I’m so grateful I learned it early in
my career. That’s where this post comes in.</p>
<p>But to be clear, the purpose of this post is not to advocate for learning
Objective-C, but to appreciate the lessons I learned from it towards becoming a
more versatile engineer. I don’t write much Objective-C anymore, but it has a
special place in my heart.</p>
<h2 id="some-context-on-me-and-objective-c">Some context on me and Objective-C</h2>
<p>My first paid work as a software engineering freelancer was in fixing bugs for
existing macOS apps. This was back in 2009: the operating system was called Mac
OS X (I’ll be calling it macOS in this post to keep things consistent), its
apps were written in Objective-C, and Xcode, Apple’s development environment,
was at version 3. Today? We have macOS with apps written in Swift, and Xcode is
at version 14.</p>
<p>It’s a language that, as the name might imply, is an object-oriented extension
of C. It has changed significantly over the years as the Apple development
environment and language standards have evolved.</p>
<p>While I would recommend Swift for modern native Apple development, looking
back at my time with Objective-C helped me appreciate the computer science
fundamentals I learned from it, fundamentals that keep cropping up later in my
career. This post covers some of them.</p>
<h2 id="different-syntax-offers-different-perspectives">Different syntax offers different perspectives</h2>
<p>Remember when you first learned to code, and how things didn’t fully click at first? For example, to many new developers concepts like dot notation (for example: <code class="language-plaintext highlighter-rouge">user.email</code> or <code class="language-plaintext highlighter-rouge">button.setLabel("Reset")</code>) take a moment to settle in. Oftentimes, it doesn’t feel intuitive that dot notation is used for calling functions or retrieving data. This is something I’ve run into often when introducing folks to programming concepts.</p>
<p>Turns out that while Objective-C uses dot notation for C-like getters and
setters, most complex method calls will require square-bracket notation.</p>
<p>Let’s illustrate this with the following example. When coding a macOS app in Swift, if we wanted to get the title of the <a href="https://developer.apple.com/documentation/appkit/nswindow?language=objc">app’s main window</a>, we’d call the following:</p>
<div class="language-swift highlighter-rouge"><pre class="highlight"><code><span class="kt">NSWindow</span><span class="o">.</span><span class="n">mainWindow</span><span class="o">.</span><span class="n">title</span>
</code></pre>
</div>
<p>In Objective-C, each message send (that is, each method call or property access) is written with square brackets:</p>
<div class="language-objective_c highlighter-rouge"><pre class="highlight"><code><span class="p">[[</span><span class="n">NSWindow</span> <span class="nf">mainWindow</span><span class="p">]</span> <span class="nf">title</span><span class="p">];</span>
</code></pre>
</div>
<p>Let’s look at a breakdown of each pair of square brackets:</p>
<ol>
<li><code class="language-plaintext highlighter-rouge">[NSWindow mainWindow]</code> sends the message mainWindow to the NSWindow class, which returns the application’s main window as an object.</li>
<li><code class="language-plaintext highlighter-rouge">[[NSWindow mainWindow] title]</code> sends the message title to the main window object, which returns the window’s title in the form of a string.</li>
</ol>
<p>What about when we want to call a method that takes a parameter? Let’s look at
the following example that sets the label of a button in a window:</p>
<div class="language-objective_c highlighter-rouge"><pre class="highlight"><code><span class="p">[[</span><span class="n">preferencesWindow</span> <span class="nf">resetButton</span><span class="p">]</span> <span class="nf">setLabel</span><span class="p">:</span> <span class="s">@"Reset"</span><span class="p">];</span>
</code></pre>
</div>
<p>So here’s a fun fact: Objective-C takes its object model and message-passing syntax from a programming language called <a href="https://learnxinyminutes.com/docs/smalltalk/">Smalltalk</a>, which is particularly significant for laying the foundations of how we write object-oriented code today. Languages like Python, Ruby, Dart, Go, Java, Scala, and more have all been influenced by Smalltalk, and yes, that includes Objective-C.</p>
<p>While not common, square-bracket syntax has a few advantages, such as a clearer visual separation between object and message, with whitespace standing in where the dot would be. Furthermore, dot notation in Objective-C <a href="https://bignerdranch.com/blog/dot-notation-syntax/">adds a layer of ambiguity</a> in that it’s not 100% clear whether a method is being called or a property is being accessed, whereas bracket notation always makes it clear that we’re sending a message.</p>
<p>Having this versatility for changing languages and embracing new syntaxes has helped me grasp concepts like <a href="https://thinkingelixir.com/course/code-flow/module-1/pipe-operator/">Elixir’s pipe operator</a> smoothly!</p>
<h2 id="pointers-on-objective-c-pointers">Pointers on Objective-C Pointers</h2>
<p>In Objective-C, a variable is a named memory location that can hold a value of a particular type, just like in other programming languages. A pointer, on the other hand, is a variable that stores the memory address of another variable.</p>
<h3 id="hold-on-a-memory-what">Hold on, a memory what?</h3>
<p>A memory address can be thought of as a bookshelf location in a library. Just like a book has a specific location on a shelf, a piece of data in a computer’s memory has a specific address, or location. The memory address serves as a unique identifier that can be used to locate and retrieve the data stored in memory. They’re usually represented in the hexadecimal format, such as <code class="language-plaintext highlighter-rouge">0x7fff5fbff8f8</code>. Just as a librarian needs to know the bookshelf location of a book in order to retrieve it, a program needs to know the memory address of data in order to access it.</p>
<p>Let’s have a look at how pointers are declared and used in Objective-C:</p>
<div class="language-objective_c highlighter-rouge"><pre class="highlight"><code><span class="c1">// Declare a variable 'myBook' and assign it a value of 42
</span><span class="kt">int</span> <span class="n">myBook</span> <span class="o">=</span> <span class="mi">42</span><span class="p">;</span>
<span class="c1">// Declare a pointer 'myBookmark' and assign it the memory address of 'myBook'
</span><span class="kt">int</span> <span class="o">*</span><span class="n">myBookmark</span> <span class="o">=</span> <span class="o">&</span><span class="n">myBook</span><span class="p">;</span>
<span class="c1">// Print the value of 'myBook'
</span><span class="n">NSLog</span><span class="p">(</span><span class="s">@"The value of myBook is %d"</span><span class="p">,</span> <span class="n">myBook</span><span class="p">);</span>
<span class="c1">// Print the memory address of 'myBook'
</span><span class="n">NSLog</span><span class="p">(</span><span class="s">@"The memory address of myBook is %p"</span><span class="p">,</span> <span class="o">&</span><span class="n">myBook</span><span class="p">);</span>
<span class="c1">// Print the value stored at the memory address pointed to by 'myBookmark'
</span><span class="n">NSLog</span><span class="p">(</span><span class="s">@"The value stored at the memory address pointed to by myBookmark is %d"</span><span class="p">,</span> <span class="o">*</span><span class="n">myBookmark</span><span class="p">);</span>
</code></pre>
</div>
<p>In Objective-C, <code class="language-plaintext highlighter-rouge">NSLog</code> prints to the console, and is the equivalent of <code class="language-plaintext highlighter-rouge">puts</code> in Ruby.</p>
<p>The output would look like this:</p>
<div class="language-plaintext highlighter-rouge"><pre class="highlight"><code>The value of myBook is 42
The memory address of myBook is 0x7ffee2d01a6c
The value stored at the memory address pointed to by myBookmark is 42
</code></pre>
</div>
<h3 id="why-use-pointers">Why Use Pointers?</h3>
<p>We use pointers in programming languages like Objective-C for several reasons:</p>
<ul>
<li>Efficient memory usage: Pointers allow us to pass data by reference, rather than by value. This means we can avoid copying large amounts of data when passing it between functions, leading to more efficient memory usage.</li>
<li>Data structures: Many data structures in Objective-C, such as linked lists and trees, rely on pointers to store and manipulate data.</li>
<li>Dynamic memory allocation: More on that shortly 👀</li>
</ul>
<p>And guess what, although a lot of higher-level languages don’t use them, modern ones certainly do, like Rust or Golang. I’ve found this to be helpful when understanding how <a href="https://wasmbyexample.dev/examples/webassembly-linear-memory/webassembly-linear-memory.rust.en-us.html">WebAssembly’s Linear Memory</a> works, for example.</p>
<h2 id="mastering-manual-memory-management">Mastering Manual Memory Management</h2>
<p>Nowadays, most programming languages have a built-in automatic <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Memory_management#garbage_collection">garbage collection mechanism</a>. Garbage collection is a way for a computer program to automatically clean up the memory that is no longer being used by the program. It helps to free up space in the computer’s memory so that it can be used for other things.</p>
<p>Objective-C did later gain an optional garbage-collection mechanism (since deprecated), but it didn’t have one when I started.</p>
<p>This means that as a programmer, I am responsible for allocating memory for data structures or objects, and then releasing that memory when it is no longer needed. In programming languages like C, you would need to specify the number of bytes you needed to create a variable containing a <code class="language-plaintext highlighter-rouge">string</code>. In Objective-C, this was done dynamically:</p>
<pre><code class="language-objective_c">NSString *myString = [[NSString alloc] init];
</code></pre>
<p>This code declares a pointer variable named <code class="language-plaintext highlighter-rouge">myString</code> that points to an instance of the <code class="language-plaintext highlighter-rouge">NSString</code> class in Objective-C.</p>
<p>The statement <code class="language-plaintext highlighter-rouge">[[NSString alloc] init]</code> creates a new instance of the NSString class using dynamic memory allocation, and initializes it with a default value. The pointer to this new object is returned by the alloc and init methods.</p>
<p>With that said and done, we can go ahead and use this in a method:</p>
<div class="language-objective_c highlighter-rouge"><pre class="highlight"><code><span class="k">-</span> <span class="p">(</span><span class="kt">void</span><span class="p">)</span><span class="n">myMethod</span> <span class="p">{</span>
<span class="n">NSString</span> <span class="o">*</span><span class="n">myString</span> <span class="o">=</span> <span class="p">[[</span><span class="n">NSString</span> <span class="nf">alloc</span><span class="p">]</span> <span class="nf">init</span><span class="p">];</span>
<span class="c1">// Do something with myString
</span><span class="p">}</span>
</code></pre>
</div>
<p>Here’s the thing: The memory has been allocated, but it will then need to be released.</p>
<h3 id="why-do-we-need-to-release-memory">Why do we need to release memory?</h3>
<p>Computers have a limited amount of memory. If you do not release the memory allocated for an object, it stays reserved and can’t be reused to hold other data. Sure, this isn’t a huge deal for, say, one string, but what if we’re failing to release dozens, hundreds, or thousands of strings?</p>
<p>This can result in a <mark>memory leak</mark>, which can lead to a shortage of memory, which in turn can cause the program to slow down or crash. In extreme cases, the entire system may become unstable, and other programs may also be affected.</p>
<p>We can release the memory manually, with the <code class="language-plaintext highlighter-rouge">release</code> message, available to all objects in Objective-C.</p>
<div class="language-objective_c highlighter-rouge"><pre class="highlight"><code><span class="k">-</span> <span class="p">(</span><span class="kt">void</span><span class="p">)</span><span class="n">myMethod</span> <span class="p">{</span>
<span class="n">NSString</span> <span class="o">*</span><span class="n">myString</span> <span class="o">=</span> <span class="p">[[</span><span class="n">NSString</span> <span class="nf">alloc</span><span class="p">]</span> <span class="nf">init</span><span class="p">];</span>
<span class="c1">// Do something with myString
</span> <span class="p">[</span><span class="n">myString</span> <span class="nf">release</span><span class="p">];</span>
<span class="p">}</span>
</code></pre>
</div>
<p>This indicates that we’re done with <code class="language-plaintext highlighter-rouge">myString</code> and can release the memory allocated to it.</p>
<h3 id="cool-but-i-work-with-a-garbage-collected-programming-language-is-there-a-reason-this-is-good-to-know">Cool, but I work with a garbage collected programming language. Is there a reason this is good to know?</h3>
<p>I’ll do you one better — here are four reasons:</p>
<ol>
<li>Debugging: While garbage collection automatically manages memory for you, there may still be cases where the program is not functioning as expected due to memory problems. Understanding manual memory management can help you diagnose and fix these issues.</li>
<li>Performance optimization: Garbage collection can be resource-intensive, and there may be situations where manual memory management can provide a performance boost. Understanding manual memory management can help you identify these situations and write code that is more efficient.</li>
<li>Portability: Not all programming languages use garbage collection, and not all environments support ones that do. For example, when writing code for microcontrollers, I had to use low-level C and carefully manage memory by hand.</li>
<li>Code optimization: Even in languages with Garbage Collection, understanding manual memory management can help you write code that runs quicker. For example, you can use techniques like <a href="https://en.wikipedia.org/wiki/Object_pool_pattern">object pooling</a> to reuse memory instead of allocating and deallocating memory frequently. <strong>Fun fact</strong>: this is how long lists can be rendered in mobile apps.</li>
</ol>
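<p>As a rough illustration of the object-pool idea from point 4, here is a tiny fixed-size pool in C that hands out pre-allocated buffers instead of calling <code class="language-plaintext highlighter-rouge">malloc</code> each time. All names and sizes are invented for this sketch:</p>

```c
#include <stdbool.h>
#include <stddef.h>

#define POOL_SIZE 4
#define BUF_BYTES 256

/* A fixed set of buffers, allocated once, up front. */
static char pool[POOL_SIZE][BUF_BYTES];
static bool in_use[POOL_SIZE];

/* Hand out the first free buffer, or NULL if the pool is exhausted. */
char *pool_acquire(void) {
    for (size_t i = 0; i < POOL_SIZE; i++) {
        if (!in_use[i]) {
            in_use[i] = true;
            return pool[i];
        }
    }
    return NULL;
}

/* Return a buffer to the pool so it can be reused. */
void pool_release(char *buf) {
    for (size_t i = 0; i < POOL_SIZE; i++) {
        if (pool[i] == buf) {
            in_use[i] = false;
            return;
        }
    }
}
```

This is the same shape as reusable row views in a long mobile list: rows scrolled off-screen go back into the pool and are handed out again for rows scrolling in.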
<p>The advantage of manual memory management is that it provides a high degree of control over how memory is used in a program. However, it also requires careful attention to detail.</p>
<p>But, you may be wondering, what happens when you want to use that allocated memory elsewhere, say in…</p>
<h2 id="low-level-concurrent-programming">Low-level concurrent programming</h2>
<p>In short, learning Objective-C helped me understand concurrent programming, which is the practice of allowing multiple tasks or processes to execute simultaneously.</p>
<p>This is something we do very often in our daily coding lives. For example, in JavaScript, we use <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise">promises</a> and <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function">async functions</a> for HTTP requests or to update a database.</p>
<p>When coding macOS apps, we can call upon a background thread to perform asynchronous tasks. Let’s say we have an object for an <code class="language-plaintext highlighter-rouge">ApiWrapper</code> with a method called <code class="language-plaintext highlighter-rouge">postData(jsonData)</code> that makes a <code class="language-plaintext highlighter-rouge">POST</code> request. We can call upon it in a background thread:</p>
<div class="language-objective_c highlighter-rouge"><pre class="highlight"><code><span class="p">[</span><span class="n">apiWrapper</span> <span class="nf">performSelectorInBackground</span><span class="p">:</span><span class="k">@selector</span><span class="p">(</span><span class="nf">postData</span><span class="p">:)</span> <span class="n">withObject</span><span class="o">:</span><span class="n">jsonData</span><span class="p">];</span>
</code></pre>
</div>
<p>How about if we wanted to update the UI of our app in that <code class="language-plaintext highlighter-rouge">postData</code> method?</p>
<p>Here’s the thing: <a href="https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/Multithreading/AboutThreads/AboutThreads.html#//apple_ref/doc/uid/10000057i-CH6-SW21">macOS requires you to only update the UI on the main thread</a>. So what do we do in <code class="language-plaintext highlighter-rouge">postData()</code>? Fortunately, we have a solution for this:</p>
<div class="language-objective_c highlighter-rouge"><pre class="highlight"><code><span class="k">-</span> <span class="p">(</span><span class="kt">void</span><span class="p">)</span><span class="nf">postData</span><span class="p">:(</span><span class="n">NSData</span> <span class="o">*</span><span class="p">)</span><span class="nv">jsonData</span> <span class="p">{</span>
<span class="c1">// Perform HTTP request
</span>
<span class="p">[</span><span class="n">self</span> <span class="nf">performSelectorOnMainThread</span><span class="p">:</span><span class="k">@selector</span><span class="p">(</span><span class="nf">updateUI</span><span class="p">:)</span> <span class="n">withObject</span><span class="o">:</span><span class="p">[</span><span class="n">response</span> <span class="nf">parsedJsonData</span><span class="p">]</span> <span class="n">waitUntilDone</span><span class="o">:</span><span class="nb">NO</span><span class="p">];</span>
<span class="p">}</span>
</code></pre>
</div>
<p>This way, the UI update is performed safely on the main thread.</p>
<h3 id="its-all-fun-and-games-until-a-race-condition-arises">It’s all fun and games until a race condition arises</h3>
<p>When coding multi-threaded applications, we need to make sure that shared resources are accessed safely. Let’s check out an example:</p>
<div class="language-objective_c highlighter-rouge"><pre class="highlight"><code><span class="k">-</span> <span class="p">(</span><span class="kt">void</span><span class="p">)</span><span class="n">updateSharedValue</span> <span class="p">{</span>
<span class="c1">// This method runs on a background thread
</span> <span class="n">self</span><span class="p">.</span><span class="n">sharedValue</span> <span class="o">+=</span> <span class="mi">1</span><span class="p">;</span>
<span class="p">}</span>
<span class="o">-</span> <span class="p">(</span><span class="kt">void</span><span class="p">)</span><span class="n">startBackgroundTask</span> <span class="p">{</span>
<span class="p">[</span><span class="n">self</span> <span class="nf">performSelectorInBackground</span><span class="p">:</span><span class="k">@selector</span><span class="p">(</span><span class="n">updateSharedValue</span><span class="p">)</span> <span class="nf">withObject</span><span class="p">:</span><span class="nb">nil</span><span class="p">];</span>
<span class="p">}</span>
<span class="c1">// Start two background tasks that update the shared value
</span><span class="p">[</span><span class="n">self</span> <span class="nf">startBackgroundTask</span><span class="p">];</span>
<span class="p">[</span><span class="n">self</span> <span class="nf">startBackgroundTask</span><span class="p">];</span>
<span class="c1">// Wait for the tasks to complete
</span><span class="p">[</span><span class="n">NSThread</span> <span class="nf">sleepForTimeInterval</span><span class="p">:</span><span class="mi">1</span><span class="p">.</span><span class="mi">0</span><span class="p">];</span>
<span class="c1">// At this point, the shared value should be 2, right?
</span><span class="n">NSLog</span><span class="p">(</span><span class="s">@"Shared value: %ld"</span><span class="p">,</span> <span class="n">self</span><span class="p">.</span><span class="n">sharedValue</span><span class="p">);</span> <span class="c1">// Output: Shared value: 1
</span></code></pre>
</div>
<p>What happened? Both background tasks are updating the same shared variable concurrently. Depending on the timing and scheduling of the threads, one task might overwrite the value updated by the other task, leading to incorrect or inconsistent results. This is known as a <mark>race condition</mark>. A race condition is a type of software bug where two or more threads or processes access a shared resource concurrently. In such a case, the final result depends on the order of execution, which can be unpredictable.</p>
<p>It’s like a game of musical chairs where the number of chairs is less than the number of players. When the music stops, everyone rushes to grab a chair, but some players are left standing without a seat. The outcome is unpredictable and depends on the timing of the players’ actions. While this is fun for humans, you wouldn’t want your serious app to behave this way.</p>
<h3 id="mutexes-to-the-rescue">Mutexes to the rescue</h3>
<p>Luckily, we have the ability to use a <mark>mutex</mark> to control shared resources in Objective-C. A mutex is a tool for ensuring that only one thread can access a shared resource (like a variable or a piece of memory) at a time. It works by allowing a thread to “lock” the resource while it’s using it, which prevents other threads from accessing it until the first thread “unlocks” it.</p>
<p>In Objective-C, this is done by invoking the <code class="language-plaintext highlighter-rouge">@synchronized</code> directive. Let’s see it in action by using it in the previous multithreaded example:</p>
<div class="language-objective_c highlighter-rouge"><pre class="highlight"><code><span class="k">-</span> <span class="p">(</span><span class="kt">void</span><span class="p">)</span><span class="n">updateSharedValue</span> <span class="p">{</span>
<span class="c1">// Only one thread at a time can run a block synchronized on the same object
</span>  <span class="k">@synchronized</span><span class="p">(</span><span class="n">self</span><span class="p">)</span> <span class="p">{</span>
    <span class="n">self</span><span class="p">.</span><span class="n">sharedValue</span> <span class="o">+=</span> <span class="mi">1</span><span class="p">;</span>
  <span class="p">}</span>
<span class="p">}</span>
</code></pre>
</div>
<p>With the above change, only one thread can execute this block of code at a time if they are synchronized on the same object. This ensures that modifications to the shared value are made in a thread-safe way. No more musical chairs, all threads know how to queue to access the value.</p>
<h3 id="is-this-relevant-in-modern-day-language-use">Is this relevant in modern-day language use?</h3>
<p>Absolutely! Modern languages have found ways to ensure developers comply with thread safety. For example, the Rust programming language has the <a href="https://doc.rust-lang.org/book/ch04-00-understanding-ownership.html">ownership model built-in as a
feature</a>
with concurrent coding and memory safety in mind.</p>
<p>Overall, understanding thread safety helped me write more robust and performant code, avoid bugs and crashes, and create a better user experience for the users of the applications I developed.</p>
<h2 id="learning-from-history">Learning from history</h2>
<p>Having been around the language for over 10 years, it’s been fascinating to
watch Objective-C evolve, shaped by new priorities and practices in the tech
industry.</p>
<h3 id="automatic-reference-counting-arc">Automatic Reference Counting (ARC)</h3>
<p>Remember all that stuff I wrote about manual memory management? Apple seems
to have agreed that it wasn’t the best way to write code, and created ARC,
which automates the process by inserting retain and release calls at compile
time.</p>
<h3 id="grand-central-dispatch-gcd">Grand Central Dispatch (GCD)</h3>
<p>To improve multithreaded code, Apple created the <a href="https://developer.apple.com/documentation/dispatch?language=objc">Grand Central
Dispatch</a>
API, improving the safety, code readability, and level of control of
multithreaded code.</p>
<h3 id="blocks">Blocks</h3>
<p>Blocks are a language feature that allow developers to create anonymous functions or closures in Objective-C. They are similar to lambda expressions in other languages and are often used for asynchronous programming and concurrent programming.</p>
<p>That’s right! They didn’t exist in the language at first. Below is an example of one:</p>
<div class="language-objective_c highlighter-rouge"><pre class="highlight"><code><span class="kt">void</span> <span class="p">(</span><span class="o">^</span><span class="n">myBlock</span><span class="p">)(</span><span class="kt">int</span><span class="p">)</span> <span class="o">=</span> <span class="o">^</span><span class="p">(</span><span class="kt">int</span> <span class="n">num</span><span class="p">)</span> <span class="p">{</span>
<span class="n">NSLog</span><span class="p">(</span><span class="s">@"The number is %d"</span><span class="p">,</span> <span class="n">num</span><span class="p">);</span>
<span class="p">};</span>
<span class="n">myBlock</span><span class="p">(</span><span class="mi">42</span><span class="p">);</span>
</code></pre>
</div>
<p>The syntax for defining a block looks a bit unusual because it uses the caret
symbol (^) and parentheses to specify its parameters. This is designed to be
similar to <a href="https://www.learnc.net/c-tutorial/c-function-pointer/">C’s function pointer
syntax</a>.</p>
<h3 id="literals">Literals</h3>
<p>Creating objects like arrays or dictionaries used to be quite unwieldy. Not anymore!</p>
<div class="language-objective_c highlighter-rouge"><pre class="highlight"><code><span class="c1">// Old way of creating an NSArray:
</span><span class="n">NSArray</span> <span class="o">*</span><span class="n">myArray</span> <span class="o">=</span> <span class="p">[</span><span class="n">NSArray</span> <span class="nf">arrayWithObjects</span><span class="p">:</span><span class="s">@"apple"</span><span class="p">,</span> <span class="s">@"banana"</span><span class="p">,</span> <span class="s">@"cherry"</span><span class="p">,</span> <span class="nb">nil</span><span class="p">];</span>
<span class="c1">// New way of creating an NSArray:
</span><span class="n">NSArray</span> <span class="o">*</span><span class="n">myArray</span> <span class="o">=</span> <span class="p">@[</span><span class="s">@"apple"</span><span class="p">,</span> <span class="s">@"banana"</span><span class="p">,</span> <span class="s">@"cherry"</span><span class="p">];</span>
</code></pre>
</div>
<h2 id="wrapping-up-how-objective-c-makes-me-feel-today">Wrapping up: How Objective-C makes me feel today</h2>
<p>Looking back at it, I’m so grateful for having spent a good chunk of my early
years with Objective-C 😌. I often say that it taught me computer science, in that
it helped me grasp several concepts that go into my daily work.</p>
<p>More than anything, it taught me to be <mark>versatile</mark>.</p>
<p>Learning the lower-level concepts helped me understand better how computers
work, giving me a better frame of reference when designing optimal
architectures.</p>
<p>Knowing how the underlying system works can help you better
understand the root cause of bugs or issues and work one of our most important
technical muscles: <mark>Problem solving</mark>.</p>
<p>Lastly, having a strong foundation in computer science can make it easier for
you to learn and adapt to new technologies and programming languages.
Versatility is at the heart of this, and I treasure this skill immensely to
this day.</p>
<h1 id="asking-for-help-effectively-as-a-software-developer">Asking for Help Effectively as a Software Developer</h1>
<p><em>2022-11-13 · Ramón Huidobro</em></p>
<p>Let’s face it, getting stuck on a programming problem <mark>stinks</mark>, doesn’t it?</p>
<p>When it comes to asking for help, especially when starting out in our careers or when onboarding onto a new codebase/project, I’ve found that we tend to fall into two categories:</p>
<ul>
<li>We ask for help the moment/shortly after we get stuck.</li>
<li>We <del>wait a while</del> <del>wait ages</del> never ask for help, no matter how long we’ve been stuck for.</li>
</ul>
<p>I’ll be the first to admit that I fall very strongly into the latter category. Even to this day, with over 12 years of experience as a software developer, I struggle to accept when it’s time to reach out for help.</p>
<p>Is one better than the other, however? I’d argue neither, really:</p>
<ul>
<li>Spending too little time on a problem keeps us from practicing, learning problem-solving skills, and catching bugs or UX/DX issues in a project.</li>
<li>On the other hand, taking too long or never asking for help can hold a timeline or team back. Maybe someone can be the saving grace thanks to their unique perspective, or because they’re especially knowledgeable about the problem!</li>
</ul>
<p>What helps is to set a consistent amount of time to spend on a problem before you reach out for help.</p>
<h2 id="how-long-should-i-wait-before-reaching-out-for-help">How long should I wait before reaching out for help?</h2>
<p>The specific amount might vary from project to project and situation to situation, but I find it’s important to be consistent within a given context. For example, one heuristic I developed over time is to give a problem no longer than 30 minutes before reaching out for a helping hand.</p>
<p>This amount of time could be different for you, of course!</p>
<h2 id="asking-for-help-effectively">Asking for help <strong>effectively</strong></h2>
<p>Right, so the title of this post is not just about how to ask for help, but how to do so effectively. How can we maximise the time for ourselves as well as the person helping us out?</p>
<h3 id="asking-narrow-pointed-questions">Asking narrow, pointed questions</h3>
<p>I spent many years working in software support and teaching children to code, and I’m sure a lot of you can relate that the following tends to not be very helpful when being asked for help:</p>
<blockquote>
<p>Could you please help me? My code is broken and nothing works.</p>
</blockquote>
<p>It’s not bad, of course! But it can be better. What I recommend doing is offering as much information as possible upfront in order to get the person helping you to start thinking right away about a solution:</p>
<blockquote>
<p>Hey, I’m having some trouble logging in since updating to the latest login dependency, could you please take a look? I’m updated to 3.1.4.</p>
</blockquote>
<p>This way, we’re priming everyone for the problem as we arrange a time to pair on the issue.</p>
<h3 id="sharing-what-youve-tried">Sharing what you’ve tried</h3>
<p>Another helpful thing to do is to immediately start knocking out possible solutions that might seem like clear suspects to the person helping us. For example:</p>
<blockquote>
<p>I’ve already deleted <code class="language-plaintext highlighter-rouge">node_modules</code>, made sure I’m on node 16, and restarted Docker, just in case.</p>
</blockquote>
<h3 id="reproducing-the-problem">Reproducing the problem</h3>
<p>Another way to help the person lending us a hand is to outline the precise set of steps that reproduce the problem that’s got us stuck:</p>
<blockquote>
<p>Ok so first I ran <code class="language-plaintext highlighter-rouge">npm install</code>, then I started up the dev environment with <code class="language-plaintext highlighter-rouge">npm run dev</code>, and then navigated to <code class="language-plaintext highlighter-rouge">localhost:4000/login</code> and then got the error message.</p>
</blockquote>
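<p>Taken together, the three habits above (context up front, what you’ve tried, and steps to reproduce) can be bundled into a single help request before you post it anywhere. As a rough sketch, here’s a tiny shell snippet that assembles one; all the specifics are just the examples from this post:</p>

```shell
# Assemble a help request from the three ingredients above.
# Every detail here is illustrative, taken from the examples in this post.
problem="Login fails since updating the login dependency to 3.1.4"
tried="deleted node_modules; confirmed node 16; restarted Docker"
steps="npm install -> npm run dev -> open localhost:4000/login -> error appears"

# Print the request, ready to paste into a DM, issue, or forum post.
printf 'Problem: %s\nAlready tried: %s\nSteps to reproduce: %s\n' \
  "$problem" "$tried" "$steps"
```

<p>However you deliver it, having these three lines ready means the person helping you can start thinking about the problem immediately.</p>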
<h3 id="requesting-a-pairing-session">Requesting a pairing session</h3>
<p>It’s been demonstrated that <a href="https://en.wikipedia.org/wiki/Pair_programming">pair programming</a> is a highly effective software development practice [Cockburn, Alistair, and Laurie Williams. “The costs and benefits of pair programming.” Extreme Programming Examined (2000): 223-247.], and this absolutely extends to problem solving when it comes to getting unstuck. Having someone sit down with you to step through the code, one of you navigating and the other driving, is super helpful!</p>
<p>And hey, why just keep it at two?</p>
<h3 id="the-more-the-merrier-requesting-an-ensemble-session">The more, the merrier: Requesting an ensemble session</h3>
<p><a href="https://ensembleprogramming.xyz/">Ensemble programming</a> builds upon the concepts of pair programming by having not just a driver and a navigator, but also an ensemble of folks collaborating on a single coding session, looking up documentation, throwing out suggestions, and rotating at regular intervals. If there are resources to do so, this can be an incredible help session!</p>
<p>This is something we’ve practised while coding on open source with <a href="https://distributeaid.org/">Distribute Aid</a>, live on stream with the chat as our ensemble. It has proven to be a lot of fun and useful for getting unstuck.</p>
<p>And those are some of the ways I’ve practised asking for help!</p>
<h2 id="what-about-asynchronous-work">What about asynchronous work?</h2>
<p>With so many of us working in remote teams, taking asynchronous communication and collaboration into account is necessary. Not much differs here, does it? The good thing about asynchronous collaboration is that we can send off a help request as early as we feel necessary, and keep trying alternatives in the meantime.</p>
<p>The added time also gives us the ability to, once again, asynchronously add further context as we go along. We can also try different avenues for help, be it posts on forums or communication platforms, <a href="https://xkcd.com/979/">being helpful and keeping these up to date</a> for the next person, too.</p>
<p>Update! <a href="https://twitter.com/Chad_R_Stewart">Chad</a> was kind enough to add some of his own experience:</p>
<blockquote>
<p>When working on a problem, open a PR and if you run into an issue, push the code. Now you can ask for help asynchronously and can show the problem directly!</p>
</blockquote>
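<p>As a sketch of that workflow (the repo and branch names are made up, and a throwaway repo is created so the commands are safe to run as-is):</p>

```shell
# Sketch of Chad's tip: commit the failing state so others can see it.
# This sets up a throwaway repo; in a real project you'd already be in one.
cd "$(mktemp -d)" && git init -q .
git config user.email "you@example.com" && git config user.name "You"

git checkout -q -b fix/login-error        # a branch for the stuck problem
echo "// the call that fails" > login.js  # the broken state, committed as-is
git add -A && git commit -q -m "WIP: login fails after dependency update"

git branch --show-current                 # prints: fix/login-error
# From here: git push -u origin fix/login-error, then open a *draft* PR
# so teammates can read the failing code and comment asynchronously.
```

<p>Marking the pull request as a draft signals that it’s a request for eyes, not a review of finished work.</p>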
<h2 id="asking-for-help-is-an-essential-part-of-software-development">Asking for help is an essential part of software development</h2>
<p>Just wanted to end this with a reminder: a lot of software is developed collaboratively. And this collaboration involves not only reviewing each other’s code, setting up helpful automations, or planning. It also involves getting others unstuck. Being on the side of offering that help is tremendously gratifying, too.</p>
<p>I’d love to hear from you! What tips do you have around asking for help when stuck? Get in touch! I’m on <a href="https://mastodon.online/@hola_soy_milk">Mastodon</a>.</p>Ramón HuidobroLet’s face it, getting stuck on a programming problem stinks, doesn’t it?My experience speaking at DevRelCon 20212021-11-15T08:24:00+01:002021-11-15T08:24:00+01:00https://ramonh.dev/2021/11/15/devrelcon-2021<p>What a year! In 2021, when I started <a href="https://ramonh.dev/2021/04/24/devrel-first-months/">my first role in DevRel</a> I also spoke at this year’s edition of <a href="https://2021.devrel.net/">DevRelCon</a>.</p>
<p>DevRelCon is a series of conferences related to DevRel, or Developer Relations, since <a href="https://developerrelations.com/events/devrelcon-london-2015">2015</a>. This year, it was held in an <a href="https://2021.devrel.net/">online format</a>.</p>
<p>I had the honour of speaking with some of the most thoughtful, clever people. I’m so thankful for this opportunity.</p>
<h2 id="coming-up-with-my-talk-proposal">Coming up with my talk proposal</h2>
<p>When the time came to propose a talk, I thought about what I had done a lot of during the year that I could turn into a talk.</p>
<p>That’s when it hit me: Streaming! I’ve been doing loads of it this year! Here’s <a href="https://github.com/meetandeat-at/takeaway-app/pull/366">an example</a> of me streaming with <a href="https://twitter.com/thamyk">Thamara</a> from the <a href="https://github.com/thamara/time-to-leave/">Time to Leave</a> project, getting onboarded to the project for the first time and making a pull request.</p>
<p>This is what my proposal looked like:</p>
<h3 id="coming-to-you-live-inclusive-effective-fun-live-streaming-for-devrel">Coming to you live! Inclusive, effective, fun live streaming for DevRel</h3>
<blockquote>
<p>Adapting to having events online has been a learning experience for us all. How can we keep a constant, steady, interactive experience for our audience?</p>
<p>In this talk, we’ll cover how we created Open Source Thursdays, including the tools we used, the accessibility considerations, as well as how we’ve improved it over time. Anyone can be a great live streamer!</p>
</blockquote>
<p>And wouldn’t you know it, my talk got accepted! I was overjoyed and so nervous!</p>
<h2 id="preparing-to-give-my-talk">Preparing to give my talk</h2>
<p>Everybody prepares talks differently. My approach tends to be as follows:</p>
<p>First, the topic pops into my head, and with it a global outline: 4-5 points or pieces of advice I want to offer.</p>
<p>Next, having written the above title and abstract, I set out to build my slides. My process is to build one ‘title’ slide for each point, and let each section and its corresponding slides grow from there.</p>
<p>Finally, when it comes to practise, again, everybody has a different technique. Here, mine is to get a script or general gist going in my head, and give the talk to myself a handful of times before being satisfied.</p>
<p>That said, I was quite nervous about giving my debut DevRel talk, so I practised over 5 times, once even an hour before the talk itself! I’ll admit I was happy to have done that.</p>
<p>One piece of feedback I was given during the tech check was to shift my setup or position to face the camera. I had recently repositioned my phone (heh, indeed, <a href="https://reincubate.com/camo/">I use my phone as my webcam</a>) to sit at an angle, as I felt this was more immersive for streams, as shown in the screenshot of one of my personal live streams below:</p>
<p><img src="https://ramonh.dev/assets/blog/2021/11/dev_rel_con_2021/personal_stream.png" alt="Screenshot of my personal stream, showing the camera pointing at me at an angle" loading="lazy" /></p>
<p>The solution was therefore to re-orient myself so that I faced the camera while keeping that angle, and could still comfortably use Google Slides’ speaker notes mode to deliver my presentation.</p>
<p>But then came the first day of the conference!</p>
<h2 id="day-1-defining-devrel-and-surprise-mc-session">Day 1: Defining DevRel and surprise MC session</h2>
<p>The day started great! It was great to hear from wonderful folks, especially the panel on “What do dev advocates actually do?” with Yashovardhan, Srushtika, Daniel and Jess.</p>
<p>Halfway through the day, I was DM’ed to be asked if I could fill last-minute in for somebody as co-MC. How could I possibly say no?</p>
<p>If you haven’t MC’ed and are curious about it, I can highly recommend it! It can be a little intimidating at first, but I try to keep a few things in mind:</p>
<ul>
<li>The audience wants you to succeed, relax!</li>
<li>Your job overall is to keep an eye out for questions and make the speakers shine! Play off of their energy.</li>
<li>I like to have a set of related questions handy, just in case. There are no bad questions!</li>
</ul>
<h2 id="day-2-management-and-metrics">Day 2: Management and metrics</h2>
<p>The second day had talks on a variety of topics, including how to prove the value of DevRel to management; “sideways management”, or how DevRel fits in with adjacent departments; and finally managing a DevRel team.</p>
<p>Sadly I couldn’t stick around for a lot of the talks, but I particularly appreciated Tim’s talk on value-driven DevRel and Bear’s talk on managing your first developer relations team. I really loved all the talks I saw!</p>
<h2 id="day-3-devrel-around-the-world-speaking-startups-my-talk">Day 3: DevRel around the world, speaking, startups, my talk!</h2>
<p>The day started wonderfully with three talks on tailoring your DevRel work to different regions, from Kay, Akanksha and Shedrack.</p>
<p>Next up was the section on “Speaking as part of DevRel”, with yours truly going first! Personally, I was super nervous leading up to it, but it tapered off quickly as I got into the talk, <a href="https://www.youtube.com/watch?v=3eea-AzrWpk">as it usually does</a>.</p>
<p>After Layla and Naomi’s wonderful talks on live coding, and crafting and delivering demos, we then had a lovely panel discussion on topics related to streaming cadence, authenticity and the different topics to stream about.</p>
<p>After that, the relief of being done felt like a rush of adrenaline going away! I broke for lunch and errands, back in time for the section on events, with informative talks from Siddharth, Philipp and Kevin.</p>
<p>I had to work the rest of the day, but made sure to catch a few more highlights before breaking for the day. My colleague and dear friend <a href="https://twitter.com/nahrinjalal">Nahrin</a> was co-MC’ing during the “DevRel for Startups” portion of the day, which included insightful talks by Shivay and Alex! I also made sure to catch my good friend <a href="https://twitter.com/SuzeShardlow">Suze</a>’s talk on marketing! Loved those slides.</p>
<h2 id="wrapping-up">Wrapping up</h2>
<p>Having online conferences has been particularly difficult. I find that my best experiences have come from a dedicated audience that interacts and keeps things going! I was so happy to see the community come together and ask questions, share experiences and resources.</p>
<p>It was a joy to be able to contribute. Thank you to <a href="https://twitter.com/matthewrevell/">Matt</a> and the rest of the team for having me and providing such a good speaking experience. 💜</p>Ramón HuidobroWhat a year! In 2021, when I started my first role in DevRel I also spoke at this year’s edition of DevRelCon.Pre-recording conference talks2021-05-11T10:24:00+02:002021-05-11T10:24:00+02:00https://ramonh.dev/2021/05/11/pre-recording-talks<p>Most of the conference talks I’ve given this year have not only been online,
but also pre-recorded.</p>
<p><a href="https://ramonh.dev/2020/10/25/running-a-conference-online/#talks">I went into the reasons for having pre-recorded talks
previously</a>,
as well as the pros and cons of doing so, so this post will focus on how I’ve
done it in my talks so far.</p>
<p>One thing to bear in mind is that this is my experience of pre-recording, having given talks in person for many years.</p>
<h2 id="the-personal-experience">The personal experience</h2>
<p>It’s no secret that performing a talk into your computer is completely
different from doing so in person.</p>
<p>One part I’ve struggled with is the lack of eye contact with the audience: being able to see their reactions to parts of my talks, or even hear a chuckle at my silly jokes.</p>
<p>Talking to a computer is not quite the same.</p>
<p>There’s still the fact that I’m talking to folks live, however! Knowing that folks are tuned in has, over time, helped me stay convinced that things are still happening as an active conversation.</p>
<p>After trying out proper streaming recently, I’ve gotten a lot more comfortable talking to a chat. Like anything, it’s practice!</p>
<p>Doing so to a video recording on the other hand is a whole other feeling too,
and likely a matter of practising as well.</p>
<p>That said, I have a high appreciation for the opportunity to be able to create
this kind of content at home! It gives folks who wouldn’t otherwise be able to
attend such events the opportunity to contribute.</p>
<p>So let’s talk hardware! Here’s my setup.</p>
<p>Precise models are listed in the following sections.</p>
<p><img src="https://ramonh.dev/assets/blog/2021/05/pre-recording-talks/desk_setup_min.jpg" alt="Picture of my desk top. Pictured are two monitors, a microphone, webcam, and more" loading="lazy" /></p>
<h2 id="camera">Camera</h2>
<p>I’ve stuck mostly to recording on my Windows 10 desktop machine, where I’ve got
a <a href="https://www.logitech.com/en-us/products/webcams/c270-hd-webcam.960-000694.html">720p-resolution Logitech
Webcam</a>.
One or two conferences have asked that I provide HD 1080p resolution video, but
since that includes slides and my face is normally only a part of the video
feed, something with a higher resolution wasn’t necessary.</p>
<p>Most laptops nowadays (at least MacBooks, as far as I know) should have a 720p or similar camera built in, which should be more than enough if you’ll be recording your slides as well as your face.</p>
<h2 id="microphone">Microphone</h2>
<p>Below are some options I’ve considered for recording my speaking audio.</p>
<h3 id="built-in-macbook-microphone">Built-in MacBook microphone</h3>
<p>Given <a href="https://ramonh.dev/2020/10/25/running-a-conference-online/#recording-the-talks">our experience with accepting pre-recorded
talks</a>,
I feel that in order to deliver a decent-quality audio experience, the
microphone built into a laptop can be disruptive due to factors like typing or
the machine’s fans. Another issue is that the quality of the voice audio
tends to depend on how close the speaker is to the microphone itself, so
sitting at arm’s length can really be a hindrance.</p>
<h3 id="headphones-headset-pods-buds-what-have-you">Headphones (headset, pods, buds, what have you)</h3>
<p>I’ve found that delivering talks with a microphone close to the speaker goes a long way toward good audio quality!</p>
<h3 id="desktop-microphone">Desktop Microphone</h3>
<p>I ultimately went with a <a href="http://www.rode.com/microphones/nt-usb">Rode NT-USB</a>
microphone, costing me about EUR 170.-</p>
<p>It did the trick as far as audio quality went, and the included <a href="https://en.wikipedia.org/wiki/Pop_filter">pop
filter</a> made me a lot more
comfortable!</p>
<p>It does have to be said that I’m using this microphone for <a href="https://ramonh.dev/2021/01/23/recording-podcast/">podcasting</a> as well, so it’s important to weigh
this as a factor.</p>
<h2 id="lighting">Lighting</h2>
<p>Getting plenty of balanced light on your face for your talk is pretty important
for your visibility onscreen, so going for good natural light or investing in
good lighting gear helps! In the past, I’ve tended more towards the former, but
lately a friend of mine pointed me towards a more costly solution which is the
<a href="https://www.elgato.com/ring-light">Elgato Ring Light</a>.</p>
<p>Your mileage may vary depending on what kind of light you get and what your
preference is. Here’s how the difference looks for me without the light on the
left and with it on the right. Both pictures were taken at night with the
room’s light on:</p>
<p><img src="https://ramonh.dev/assets/blog/2021/05/pre-recording-talks/lighting_difference.jpg" alt="Me without the light on in the left, and with it on in the right" loading="lazy" /></p>
<p>The difference might be subtle, but it makes me happy to have an even light on my face!</p>
<h2 id="software">Software</h2>
<p>Alright! Let’s talk software.</p>
<p>When I spoke at <a href="https://futuresync.co.uk/">Future Sync</a> online, it was
the first time I had to pre-record a talk! They very kindly provided a
walkthrough for recording my talk using the open source software
<a href="https://obsproject.com/">OBS</a>. They recommended using three sources:</p>
<ul>
<li>Window capture for my slides</li>
<li>Video capture for my webcam</li>
<li>Audio capture for my microphone</li>
</ul>
<p>Here it is in action:</p>
<p><img src="https://ramonh.dev/assets/blog/2020/08/online-conf/obs.png" alt="Screenshot of OBS in action showing my webcam in the corner and my slides at the top left" loading="lazy" /></p>
<p>By clicking on “Start recording”, I can immediately start recording!</p>
<p>For more info on getting started with OBS, I can totally recommend <a href="https://obsproject.com/wiki/OBS-Studio-Quickstart">their
guides</a>.</p>
<h3 id="subtitles">Subtitles</h3>
<p>In my talks, I like to include captions using <a href="https://support.google.com/docs/answer/9109474?hl=en">Google
Slides</a>’ built-in
functionality. Thank you to <a href="https://twitter.com/codepo8/">Chris</a> for showing
me this feature!</p>
<h2 id="slip-ups">Slip-ups</h2>
<p>It’s gonna happen. It happens to us all the time! When giving a talk, I’ll say
the wrong thing and have to correct myself briefly, or go to the next slide too
early, or jumble my words, or pause awkwardly for a second. The temptation to
cut that out of the video is <mark>strong</mark>.</p>
<p>When I initially dealt with these slip-ups, I wasn’t well-experienced with
video, and didn’t consider editing them out. So what would I do if I
considered a slip-up egregious enough to warrant intervening in the recording?
Why, start over!</p>
<p>And what if, you may ask, the talk was long? I realise this can be extremely
time-consuming. Nowadays, I’ve been using some open source video-editing
software like <a href="https://www.openshot.org/">OpenShot</a> for editing our podcast
episodes and have gotten comfortable with having a minor edit here and there to
make the video experience a little smoother.</p>
<p>Over time, however, I’ve grown more comfortable with leaving these minor
slip-ups in the talk recording. Like I said at the beginning of this section,
these are natural in live conference talks, and can totally happen while
recording! After all, you’re kinda recording “live”, in a way 😅.</p>
<h2 id="conclusions">Conclusions</h2>
<p>The thing to definitely bear in mind is that our experience pre-recording
talks is unique to each of us.</p>
<p>I’ve seen folks try out some wonderful things, given that they have the
opportunity to pre-record. I’ve seen folks give talks <a href="https://fosdem.org/2021/schedule/event/codemirror/">while
hiking</a>, or <a href="https://www.youtube.com/watch?v=Qcn0GgDlLfk">make fun use
of editing</a>!</p>
<p>I’ve been using OBS more for streaming lately, and getting quite comfortable
with it. I could try using this to have more of an opportunity to have distinct
cuts in my talks as well as different screen layouts! I don’t necessarily need
to be in the corner, I can be next to my slides, or around them!</p>
<p>I’d love to hear what you’re trying when pre-recording talks, too!</p>Ramón HuidobroMost of the conference talks I’ve given this year have not only been online, but also pre-recorded.My first few months as a developer advocate2021-04-24T10:24:00+02:002021-04-24T10:24:00+02:00https://ramonh.dev/2021/04/24/devrel-first-months<p>I have spent the last 10+ years freelancing and contracting as a software
developer. This January, that changed when I became a developer advocate at
<a href="https://www.codesee.io/">CodeSee</a>.</p>
<p>I’m not gonna lie, it’s been an exhilarating opportunity to dip my toes into Developer Relations, or DevRel.</p>
<p>Next week marks the end of my fourth month at CodeSee. Having listened to
an incredible <a href="https://www.youtube.com/watch?v=_q_bWATVJTg">Twitter Spaces</a>
event where folks shared their experiences and insights into what DevRel is, I
felt it was a good time to share my own thoughts and experiences from the last
few months of this new journey. I wanted to expand on the how and why of getting into
DevRel by adding what it’s like a few months in.</p>
<h2 id="some-context">Some context</h2>
<p>CodeSee is working on a dev tool to help folks understand and collaborate on
complex code bases. My job partly consists of thinking about the kinds of
solutions I can offer fellow devs, such as the types of content related to
understanding code, reading code, collaborating, mentoring, etc.!</p>
<p>Touching on the context of the story itself: CodeSee is currently in a
closed Beta and up until a week ago was in stealth, meaning that instead of
trying to have a broad outreach, we were going for a deliberate, 1:1 approach.</p>
<h2 id="on-jobsharing">On jobsharing</h2>
<p>My colleague and friend <a href="https://twitter.com/jesslynnrose">Jessica</a> approached
me toward the end of 2020 with the idea of doing a job share. In short, CodeSee
hired the two of us to fulfil one full-time role, splitting the compensation. I
won’t go into the details of how it works or why it’s beneficial, because
<a href="https://www.codesee.io/blog/on-jobsharing">she’s already done that in a fantastic
way</a>.</p>
<p>Doing this as a part-time gig has given me the benefit of not having to
immediately stop freelancing, instead just taking on smaller
contracts or keeping existing ones on the side. Having this flexibility has
allowed me to continue exploring software development while being
more selective about how I spend that time. It’s a process I’m working on to
improve my work-life balance.</p>
<h2 id="a-developer-advocate-advocates-for-developers">A developer advocate… advocates for developers!</h2>
<p>I found the <a href="https://twitter.com/dabit3/status/1383873047619276812">following
tweet</a> from Nader Dabit
really helpful:</p>
<blockquote>
<p>In DevRel, if you can make developers both successful and happy then almost everything else will take care of itself.</p>
<p>The problem is that it takes a lot of consistent & sometimes thankless work over a long period of time to get there.</p>
</blockquote>
<p>One of my favourite parts of being involved in tech communities after many
years has been empowering people in their tech journeys. It’s brought me great
amounts of joy to be able to do things like:</p>
<ul>
<li>Coach at workshops like <a href="http://railsgirls.com/">Rails Girls</a></li>
<li>Give workshops for organisations like <a href="https://www.refugeescode.at/">Refugees Code Vienna</a></li>
<li><a href="https://ramonh.dev/speaking/">Give talks at different conferences</a></li>
<li>Help organise and support teams at <a href="https://railsgirlssummerofcode.org/">Rails Girls Summer of Code</a></li>
</ul>
<p>Funnily enough it wasn’t until the idea of working as a developer advocate was
proposed to me that it clicked: this could be something I could be good at
or at the very least already have been doing!</p>
<p>Nevertheless…</p>
<h2 id="hello-impostor-syndrome-my-old-friend">Hello impostor syndrome, my old friend</h2>
<p>My relationship with impostor syndrome confuses me sometimes. It’s never really gone away in the 10+ years that I’ve been building software.
One of the most important parts of working as a freelancer has been seeing lots of
different projects, codebases, and ways that different clients work, and
learning to get quite comfortable in my discomfort when confronted with an
unfamiliar problem. I even <a href="https://www.youtube.com/watch?v=6T_6THrR5Qo">gave a talk about
this</a> at JSUnconf 2018. It’s a
topic I feel quite comfortable in. Again, that’s not to say it’s
completely gone, mind you, just manageable.</p>
<p>That is, until I started this DevRel role. All of a sudden, the impostor
syndrome washed over me, big time! Even though I’d been doing DevRel-adjacent
things for many years, this felt different.</p>
<p>One thing that’s helped me with this has been to just talk to fellow DevRel
folks! <a href="https://twitter.com/jna_sh">Joe</a> was kind enough on the week before my
first day to sit down with me, hear me out, calm my nerves, and recommend some
resources:</p>
<ul>
<li><a href="https://wenger-trayner.com/introduction-to-communities-of-practice/">Introduction to communities of practice</a></li>
<li><a href="https://www.gse.harvard.edu/news/uk/08/05/what-teaching-understanding">What is Teaching for Understanding?</a></li>
<li><a href="https://www.contentstrategy.com/content-strategy-for-the-web">Content Strategy for the Web</a></li>
<li><a href="https://www.amazon.com/Badass-Making-Awesome-Kathy-Sierra/dp/1491919019">Badass: Making Users Awesome</a> by Kathy Sierra</li>
<li><a href="https://www.stephaniemorillo.co/books">Stephanie Morillo’s books</a></li>
</ul>
<p>The shows of kindness from Jess, Joe, and so many others have meant the world to
me. My intention is to pay it forward, so if you’re nervous about getting into
DevRel, please feel free to reach out!</p>
<p>Turns out there are also loads of communities and events related to DevRel!
I’ve been happily helping <a href="https://twitter.com/DevRelSalon">The DevRel Salon</a>
with note-taking, and also joined the <a href="https://devrelcollective.fun/">DevRel
Collective</a>.</p>
<p>This is all made especially easier by the fact that my fellow teammates at
CodeSee have been nothing but supportive, uplifting, and overall wonderful.</p>
<h2 id="on-demoing">On demoing</h2>
<p>One of my main tasks at CodeSee has been holding sessions with folks interested
in trying out the product. As I said before, CodeSee was in stealth and in
closed Beta, so we’ve been inviting folks to 1:1s with myself and/or Jess to
show them the tools in action. This has been the first of a new set of skills
I’ve been developing as a developer advocate, and it’s been fascinating.</p>
<p>The closest thing I’ve been able to compare this to has been giving talks at
conferences or other events. I remember how easy it became after time to
develop a routine in my head for giving a talk, trying, of course, to adapt
these to the audience and context.</p>
<p>When it comes to giving demos of a devtool, however, there are some
differences to keep in mind! Most significant are the scale and the approach. Since
demos tend to be 1:1, we can tailor the demo experience to that one developer or
team lead. One of my favourite pieces of advice came from
<a href="https://twitter.com/Sareh88">Sareh</a>, who after a demo advised opening
with a question:</p>
<blockquote>
<p>What do you hope to get out of this demo?</p>
<p>What are your pain points?</p>
</blockquote>
<p>What this does for me is completely change the tone of the demo: rather than
being one-sided, with me talking at the demo attendee (demoee?), it opens the discussion
to go both ways and allows me to really, as the job role would imply,
advocate for the developer I’m showing the tool to, and for how our tool can make their
job easier.</p>
<h2 id="on-following-up">On following up</h2>
<p>Our goal as a dev tool over the Beta is to get as much feedback as we can from
users and see where things can be improved and how to go about this. This will
involve a lot of following up with leads.</p>
<p>What I mean by following up in this case is, after showing someone CodeSee and
giving them access to the Beta, messaging them to see whether they’ve tried
it, whether they’re stuck on the install portion, and whether we can pair
with them on it.</p>
<p>A personal struggle of mine has been figuring out how to follow up without
being (or feeling!) overly pushy. Honestly, it’s been the hardest part for me, since I
never want to bother anyone.</p>
<p>I’ve been trying to reframe this as an opportunity to empower a
fellow developer rather than chase them, and to see how we can help. They are by no
means forced into a relationship where they use our tool unless they need it!</p>
<p>Lastly, it’s important to bear in mind that the worst thing that’ll happen is
you’ll be turned down. And that’s ok! It’s not a “never”, it’s a “not now,
maybe not me”. As much better put by <a href="https://flak.is">Flaki</a>:</p>
<blockquote>
<p>I think to me the thing that best helped to frame this has been the
understanding of ‘just because this person/team does not need what you can
offer <em>right now</em>, by being helpful and giving them a great experience you
already sown a seed and you never know when something comes out of that
relationship.’</p>
</blockquote>
<h2 id="producing-content">Producing content</h2>
<p>When coming into the role, the value that CodeSee provides to developers comes
to mind: being a continuous understanding tool, our aim is to help developers
understand, document and collaborate on complex codebases.</p>
<p>Going off of this, the mind begins to wander: how can we, as developer advocates,
communicate our value this way?</p>
<p>When it comes to blog posts, I can think of the value I’ve had in reading
codebases over the years, and how developers can benefit from doing so. So <a href="https://www.codesee.io/blog/the-value-of-reading-code">I
wrote the article</a>, the
first of hopefully many!</p>
<p>As time goes on, we plan to expand this into other media, such as talks,
videos, and more.</p>
<h2 id="streaming">Streaming</h2>
<p>This last week, Jess and I held a soft launch of our streaming with CodeSee! We had
a friendly chat as an introduction to open source.</p>
<p>It was also an opportunity to try out our tech stack. Since this was a soft
launch, we did it with Zoom, <a href="https://tech.paulcz.net/blog/streaming-from-zoom-to-twitch/">streaming onto
Twitch</a>. We were
adamant about having captioning on our stream, which <a href="https://support.zoom.us/hc/en-us/articles/207279736-Closed-captioning-and-live-transcription">Zoom supports as a paid feature</a>!</p>
<p>We did a <a href="https://twitter.com/hola_soy_milk/status/1385247029224411139">brief, last minute
announcement</a> to
get some friends and loved ones in and had a great time!</p>
<h2 id="open-source">Open Source</h2>
<p>One task I’ve enjoyed greatly during my time at CodeSee is trying our tool on
different open source projects. One advantage of doing this is exercising the
CodeSee tracker on multiple codebases and finding points of improvement.</p>
<p>On the other hand, I’ve been getting to see the value that we can bring open
source maintainers by aiding in the onboarding process of their projects. Heck,
by trying to install CodeSee onto a codebase, I’m getting partially onboarded
onto these projects! Since this is something that matters a lot to what we’re
trying to do at CodeSee, getting this experience first hand has been really
valuable.</p>
<h2 id="moving-forward">Moving forward</h2>
<p>Well now that we’re out of stealth, we can start looking into other kinds of
activities, such as collaborating with other folks, giving talks, appearing on
podcasts, and more!</p>
<h2 id="conclusion">Conclusion</h2>
<p>When talking to folks potentially interested in DevRel, I get the impression
that the lucrative nature of the job can be intimidating or seem like an
end goal. My hope is that by sharing my experiences I can show that learning is
definitely still involved, and to invite you to give it a try!</p>
<p>Identifying the areas where I feel the most comfortable, such as empowering fellow
devs, trying out sticky open source projects, writing blog posts, and coming up
with and giving conference talks, all while dealing with my impostor syndrome, has
helped me focus my efforts on those specific areas.</p>
<p>As I said at the start, working in DevRel has been an absolute joy for me. If
you’ve recently started a DevRel position or are interested in doing so, please
feel free to reach out!</p>Ramón HuidobroI have spent the last 10+ years freelancing and contracting as a software developer. This January, that changed when I became a developer advocate at CodeSee.Hosting a Podcast from Scratch2021-01-23T10:24:00+01:002021-01-23T10:24:00+01:00https://ramonh.dev/2021/01/23/recording-podcast<p>A few months ago, <a href="https://twitter.com/TimeaTurdean">Timea</a> and I started
hosting the <a href="https://twitter.com/gendercoffee">Gender Equality over Coffee</a>
podcast, as part of the <a href="https://www.womentechmakers.at/podcast/">Women Techmakers
Vienna</a> organisation. It’s been a
blast!</p>
<p>Now that we’ve been at it for a while, I wanted to share how we make this
podcast happen. Huge thanks goes out to Jason C. McDonald for <a href="https://dev.to/codemouse92/self-hosting-a-podcast-4b3f">this very
helpful article</a> and to
<a href="https://twitter.com/informatom">Stefan Haslinger</a> for all your support!</p>
<h2 id="recording">Recording</h2>
<p>Right off the bat, Gender Equality over Coffee is a <a href="https://www.youtube.com/watch?v=iCQv8ZI8SAo&list=PLVr4my5fwE_UF4wun_SE11EDyDbg7hmJq">video
podcast</a>.
Recording will therefore involve not only audio, but also video.</p>
<p>We currently record our episodes on <a href="https://zoom.us/">Zoom</a>. The advantages
offered by Zoom are:</p>
<ul>
<li>Recording of video and audio.</li>
<li>Guests can join using their Zoom client.</li>
<li>Zoom is used by loads of folks lately.</li>
</ul>
<h2 id="editing">Editing</h2>
<p>Given we’re dealing with a video podcast, we’re splicing in an intro and outro, as
well as any tiny edits each episode needs, with an open source app called
<a href="https://www.openshot.org/">OpenShot</a>.</p>
<p>One way to make the transitions between spliced clips as seamless as
possible is to use fade-ins and fade-outs:
<img src="https://ramonh.dev/assets/blog/2021/01/podcast-recording/openshot.png" alt="Screenshot of OpenShot in action, showing how to add fade ins and outs" loading="lazy" /></p>
<h2 id="converting">Converting</h2>
<p>Using OpenShot, we then export an <code class="language-plaintext highlighter-rouge">mp4</code> or <code class="language-plaintext highlighter-rouge">mov</code> video using the ‘1080p 29.9fps’
profile. When we tried exporting in the 30fps (frames per second) format, the audio/video sync
would break. We’re not sure why this happened, but it was consistent! Switching to
the 29.9 option eliminated the issue.</p>
<p>We can then use an open source app called VLC to <a href="https://www.vlchelp.com/convert-video-audio-mp3/">convert the exported video
file to mp3</a>.</p>
<p>Once all of that’s exported, we can then upload the video file to YouTube.</p>
<p>Then comes the audio!</p>
<h2 id="audio-file-hosting">Audio file hosting</h2>
<p>While we get this podcast off the ground, we’re operating on a volunteer,
non-profit basis. Therefore, we’re keeping costs low. One thing I learned from
the article posted above is that we can host the episodes for free on
<a href="https://archive.org/">Archive.org</a>. This allows us to upload and host the
<code class="language-plaintext highlighter-rouge">mp3</code> files on their website and link to them from the RSS feed (more on that
later 👇).</p>
<p>After it’s processed, we’ve got access to a direct link.</p>
<p>Quick note about uploading to Archive.org, however: <strong>the license has to be one of CC0,
Creative Commons, or Public Domain</strong>. We ended up going with Creative Commons.</p>
<h2 id="rss-feed">RSS feed</h2>
<p>The great thing about RSS is that its format is flexible enough to be expanded
upon and generated dynamically!</p>
<p>The <a href="https://www.womentechmakers.at">Women Techmakers Vienna</a> website is built
using <a href="https://jekyllrb.com/">Jekyll</a>, a static website generator. In our
<code class="language-plaintext highlighter-rouge">_config.yml</code> file, we define an episode collection, as well as a general set of
podcast metadata:</p>
<div class="language-yaml highlighter-rouge"><pre class="highlight"><code><span class="s">collections</span><span class="pi">:</span>
  <span class="s">podcast_episodes</span><span class="pi">:</span>
    <span class="s">output</span><span class="pi">:</span> <span class="s">true</span>
<span class="s">podcast</span><span class="pi">:</span>
  <span class="s">title</span><span class="pi">:</span> <span class="s">Gender Equality Over Coffee</span>
  <span class="s">description</span><span class="pi">:</span> <span class="s">Let's talk intersectional gender equality from the perspective of organizations and individuals that strive for a more inclusive world.</span>
  <span class="s">url</span><span class="pi">:</span> <span class="s">/podcast.xml</span>
  <span class="s">author</span><span class="pi">:</span> <span class="s">Women Techmakers Vienna</span>
  <span class="s">email</span><span class="pi">:</span> <span class="s">wtmvie@gmail.com</span>
  <span class="s">logo</span><span class="pi">:</span> <span class="s">/img/podcast/logo_feed.JPG</span>
  <span class="s">language</span><span class="pi">:</span> <span class="s">en</span>
  <span class="s">category</span><span class="pi">:</span> <span class="s">Business</span>
  <span class="s">subcategory</span><span class="pi">:</span> <span class="s">Non-Profit</span>
  <span class="s">type</span><span class="pi">:</span> <span class="s">episodic</span>
  <span class="s">explicit</span><span class="pi">:</span> <span class="s">false</span>
  <span class="s">complete</span><span class="pi">:</span> <span class="s1">'</span><span class="s">no'</span>
  <span class="s">block</span><span class="pi">:</span> <span class="s1">'</span><span class="s">no'</span>
</code></pre>
</div>
<p>We can then define podcast episodes inside the <code class="language-plaintext highlighter-rouge">_podcast_episodes</code> folder, writing their show notes in Markdown with front-matter metadata:</p>
<div class="language-plaintext highlighter-rouge"><pre class="highlight"><code>---
layout: podcast
title: 0. The what, who and why of Gender Equality over Coffee
author: Women Techmakers Vienna
isStaticPost: true
image: ../podcast/logo.JPG
episode: 0
episodeType: full
explicit: false
length: 394
date: 2020-12-29
audio: host://url_to_episode.mp3
---
# SHOW NOTES
# TRANSCRIPTION
</code></pre>
</div>
<p>You might’ve noticed we’re using a <code class="language-plaintext highlighter-rouge">podcast</code> layout in the above Markdown.
Well, given that this is Markdown data, we can use it to render the show notes,
as well as an audio player, on the website!</p>
<p>Here’s the layout:</p>
<div class="language-plaintext highlighter-rouge"><pre class="highlight"><code>---
layout: post
---
<br>
<audio controls preload='auto' style='width: 100%;'><source src='{{ page.audio }}'></audio>
{{ content }}
</code></pre>
</div>
<p>We use an <code class="language-plaintext highlighter-rouge">audio</code> tag to play the mp3 file! <a href="https://www.womentechmakers.at/podcast_episodes/episode_2.html">Here’s how it looks in
action</a>.</p>
<p>Deploying this live gives us <a href="https://www.womentechmakers.at/podcast.xml">a working RSS feed</a>!</p>
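<p>The post doesn’t show the feed template itself, so here’s a minimal sketch of what a Jekyll <code class="language-plaintext highlighter-rouge">podcast.xml</code> could look like. This is an illustrative reconstruction rather than our exact template; it reads the <code class="language-plaintext highlighter-rouge">_config.yml</code> and episode metadata shown above, and the <code class="language-plaintext highlighter-rouge">itunes:</code> tags may need adjusting for the listing services you submit to:</p>

```liquid
---
layout: null
---
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
  <channel>
    <title>{{ site.podcast.title }}</title>
    <description>{{ site.podcast.description }}</description>
    <language>{{ site.podcast.language }}</language>
    <itunes:author>{{ site.podcast.author }}</itunes:author>
    <itunes:explicit>{{ site.podcast.explicit }}</itunes:explicit>
    {% for episode in site.podcast_episodes %}
    <item>
      <title>{{ episode.title }}</title>
      <itunes:episode>{{ episode.episode }}</itunes:episode>
      <itunes:duration>{{ episode.length }}</itunes:duration>
      <enclosure url="{{ episode.audio }}" type="audio/mpeg" />
      <pubDate>{{ episode.date | date_to_rfc822 }}</pubDate>
      <description>{{ episode.content | strip_html | truncate: 400 }}</description>
    </item>
    {% endfor %}
  </channel>
</rss>
```

<p>Because the template loops over <code class="language-plaintext highlighter-rouge">site.podcast_episodes</code>, publishing a new episode is just a matter of adding a Markdown file to the collection and redeploying.</p>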
<h2 id="transcribing">Transcribing</h2>
<p>Making a show as accessible as possible was a pretty important goal for us from
the outset.</p>
<p>Below is an example of how the transcripts look:</p>
<div class="language-plaintext highlighter-rouge"><pre class="highlight"><code>- **TIMEA**: Hey Ramón!
- **RAMÓN**: Hey Timea!
- **TIMEA**: Let's talk gender equality.
- **RAMÓN**: I love the idea. Gimme a second I just gotta grab my coffee, I hope you've got yours, too!
- **TIMEA**: Yep, right here.
</code></pre>
</div>
<p>Transcribing can be a lot of work. One very helpful tip: since, as
previously mentioned, we upload our videos to YouTube, you can download the
automatically generated subtitles from YouTube Studio:</p>
<p><img src="https://ramonh.dev/assets/blog/2021/01/podcast-recording/youtube_subtitles.png" alt="Screenshot of YouTube studio, allowing to download subtitles" loading="lazy" /></p>
<p>We can then use these as a basis to clean up the transcriptions. These then go
into the show notes and the RSS feed.</p>
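<p>To give an idea of that cleanup step: subtitle files downloaded from YouTube (e.g. <code class="language-plaintext highlighter-rouge">.srt</code>) interleave counters and timestamps with the spoken lines. A quick shell pipeline, sketched here with sample data rather than a real download, can strip those out before the manual editing pass:</p>

```shell
# Sample .srt content standing in for a downloaded subtitle file
srt='1
00:00:01,000 --> 00:00:03,000
Hey Ramón!

2
00:00:03,500 --> 00:00:05,000
Hey Timea!'

# Drop the numeric counters and timestamp lines, then blank lines,
# leaving just the spoken text to edit into the transcript format
printf '%s\n' "$srt" | grep -vE '^[0-9]+$|^[0-9]{2}:[0-9]{2}' | sed '/^$/d'
```

<p>From there, it’s a matter of fixing mistakes and adding the speaker names by hand.</p>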
<h2 id="publishing">Publishing</h2>
<p>Last part is getting the podcast out there into the world! What we can do is submit the podcast to different listings, such as:</p>
<ul>
<li><a href="https://podcastsconnect.apple.com/">Apple Podcasts</a></li>
<li><a href="https://panoptikum.io/pan/feed_backlogs/new">Panoptikum</a></li>
<li><a href="https://podcasters.spotify.com/gateway">Spotify for Podcasters</a></li>
</ul>
<h2 id="up-and-running">Up and running!</h2>
<p>With that, we’ve been up and going! We just gotta keep at it and let our show evolve and improve over time.</p>
<p>I hope these tips will help you get up and running. I’d love to hear about
your thoughts and experiences with creating podcasts. <a href="https://twitter.com/hola_soy_milk">Hit me
up!</a></p>Ramón HuidobroA few months ago, Timea and I started hosting the Gender Equality over Coffee podcast, as part of the Women Techmakers Vienna organisation. It’s been a blast!How we did audio/video for our online conference2020-10-25T10:24:00+01:002020-10-25T10:24:00+01:00https://ramonh.dev/2020/10/25/running-a-conference-online<p>This year’s <a href="https://www.womentechmakers.at/">Women Techmakers Vienna</a>
conference was meant to take place in March 2020, but had to be postponed.
After three months of monitoring the possibility of holding it later, we as a team
decided to hold it online.</p>
<p>Near the beginning of July 2020, we decided to hold the annual Women
Techmakers Vienna conference purely online on the 7th and 8th of August 2020.</p>
<h3 id="doesnt-that-make-prep-time-one-month">Doesn’t that make prep time one month?!</h3>
<p>One month… ish. We had already made a lot of preparations toward having the conference in March. This included having things like:</p>
<ul>
<li>An already-confirmed speaker lineup that we could reach out to about speaking at our online event</li>
<li>A list of sponsoring partners that we could update and see if they were interested in partnering</li>
</ul>
<p>Another thing worthy of note is that Women Techmakers Vienna is run by a group of volunteers. It is not a for-profit conference.</p>
<p>What follows is the story of the series of decisions we made, why we made them, and how they panned out!</p>
<h2 id="audience-interaction">Audience interaction</h2>
<p>One of the critical ingredients of a conference is giving the audience the
opportunity to interact with each other, ask questions to speakers, voice
concerns, give feedback, chat with partners, and so on!</p>
<p>Coming up with a plan to provide as satisfying (not identical!) an experience
as possible to in-person events for our attendees meant looking at how other
online events have been doing this: chat platforms!</p>
<p>When it comes to giving the audience a platform to interact with one another,
asking questions to speakers or approaching partners, we decided to try a
centralised approach.</p>
<p>I’ve been seeing a lot of tech communities pop up on <a href="https://discord.com">Discord</a>, such as that from <a href="https://queerjs.com/">Queer.js</a>, <a href="https://chat.vuejs.org">Vue.js</a> or <a href="https://discord.gg/rust-lang">Rust</a>.</p>
<p>While Discord might be associated with games, they’ve <a href="https://blog.discord.com/discord-is-for-your-communities-3d14464d4c7b">recently</a> adopted a more community-oriented approach.</p>
<p>It does, however, require having a separate app running alongside the viewing platform, say, YouTube or Twitch chat (more on those later).</p>
<p>So why use Discord instead of…</p>
<h3 id="slack">Slack</h3>
<p>One big argument is that we already have a Slack workspace used for organising our
activities internally!</p>
<p>However, unlike Slack, Discord doesn’t require one account (email/password pair) per server. You can have one identity across several servers.</p>
<p>Furthermore, in the interest of providing a safe, inclusive environment for viewers, Discord allows users to block one another if needed.</p>
<p>We also appreciated the prominently visible use of <a href="https://support.discord.com/hc/en-us/articles/214836687-Role-Management-101">roles</a>.</p>
<h3 id="youtubetwitch-chat">YouTube/Twitch chat</h3>
<p>I’ve lumped these both into the same space due to their similar functionality.</p>
<p>Discord allows us to set up multiple channels, enabling viewers to talk about different topics. That includes being able to do so between the broadcasts.</p>
<p>We ultimately decided to disable YouTube live chat, so we could focus and encourage people to use Discord.</p>
<h3 id="why-not-have-several-chat-platforms">Why not have several chat platforms?</h3>
<p>We believe that having a unified space for chat would make it easier to bring
together the community during the talks. Having this unification meant they
were easier to moderate, which is something we had to consider as well when
picking Discord.</p>
<p>We ran the risk of having fewer people join chat overall, but we went with it.</p>
<h3 id="wont-the-discord-server-become-a-ghost-town-once-the-event-is-over">Won’t the Discord server become a ghost town once the event is over?</h3>
<p>I have seen this happen to some event-oriented platforms.</p>
<p>However, since Women Techmakers Vienna is a yearly-run conference, accompanied by a series of meetups throughout the year, we decided to instead turn it into a community platform. We’ll continue to announce meetups and other events to encourage activity.</p>
<h3 id="voice-chat">Voice chat</h3>
<p>One thing we thought of leveraging was the built-in voice channels for Discord.
Since our event was split over two days, we decided to have a breakfast hangout
on the Saturday morning. People could just pop into the voice channel on
Discord and hang out! It was a lot of fun.</p>
<h2 id="viewing-platform">Viewing Platform</h2>
<p>We chose to stream our conference on YouTube. Our reasons for this choice include:</p>
<ul>
<li>We can embed the stream onto <a href="https://www.womentechmakers.at/">our website</a>.</li>
<li>We can disable the chat completely (as written above, we want to encourage the chat to take place in a single space).</li>
<li>YouTube allows people to rewind during the broadcast, even while the show is live.</li>
<li>Once the show is over, the stream is available for viewing immediately thereafter.</li>
<li>The stream allows us to have a URL we can stream closed captions to (more on that lower down).</li>
</ul>
<h2 id="streaming-platform">Streaming Platform</h2>
<p>We ended up going with <a href="https://streamyard.com">StreamYard</a> as our streaming platform.</p>
<p><img src="https://ramonh.dev/assets/blog/2020/08/online-conf/sy-test.png" alt="Screenshot of me broadcasting myself on StreamYard" loading="lazy" /></p>
<p>StreamYard allows us to set up a broadcast that gets piped directly into YouTube.</p>
<p>The main video shows what’s currently being broadcast. Below that are the scenes available to us. Scenes in this case are the layouts we can display to viewers. From left to right:</p>
<ul>
<li>Single speaker</li>
<li>Full screen speakers</li>
<li>Small speakers with background</li>
<li>Person speaking bigger</li>
<li>Speaker with shared video</li>
<li>All speakers with shared video</li>
<li>Shared video full screen</li>
</ul>
<p>We can also have banners of text that either run through or float at the bottom of the screen:
<img src="https://ramonh.dev/assets/blog/2020/08/online-conf/banners.png" alt="Screenshot showing banner saying &quot;We'll be back at 09:30 CEST&quot; in broadcast" loading="lazy" /></p>
<p>We can share slides, screens as well as videos.</p>
<p>One thing to note is that it’s not free. We ended up paying USD $25 for a month
of the service. There are open source alternatives, such as
<a href="https://obsproject.com/">OBS</a>, but we found that StreamYard was the best
choice for us at the time. We could onboard fellow organisers to use it quickly
and had I not been able to run A/V for the conference, one of them would’ve
taken over.</p>
<p>Another advantage from using StreamYard is speakers can be given a link to join the stream, without the need to use something like Skype to connect.</p>
<p>When it comes to sharing pre-recorded content, StreamYard allows us to share a
browser tab with the video file loaded in it. Note that this only works on
Google Chrome (at least at the time of writing). Closer to the event,
StreamYard added the functionality of uploading 5-minute clips, which was
helpful for things like partner video reels, but it didn’t provide things like
volume control, so we opted for browser tabs for the event.</p>
<h2 id="talks">Talks</h2>
<p>For this year, we decided to ask our speakers to please send us a pre-recorded video of their talk.</p>
<p>The alternative would be to allow speakers to tune in and give their presentation live.</p>
<p>Let’s go over some of the reasons for having pre-recorded talks:</p>
<ul>
<li>We minimise connection issues. These won’t be completely lost, as of course we still depend on the connection of the person streaming the videos as well as that of the person watching the stream. However, we won’t depend on the connection of each speaker.</li>
<li>From an organisational standpoint, we will know immediately in advance how long each talk will be, and manage our schedule/breaks accordingly.</li>
<li>By allowing speakers to pre-record their videos, they can (if they want) add all the post-production flair they want! One example I particularly enjoyed was Michael Jolley’s presentation on <a href="https://www.youtube.com/watch?v=Qcn0GgDlLfk">living puppets</a> at <a href="https://halfstackconf.com/">HalfStack Online 2020</a>.</li>
<li>Newcomers to public speaking have the opportunity to practise, get feedback from us, and see what they can improve (easier said than done; I can barely watch myself on video! 🙈).</li>
</ul>
<p>One glaring problem with pre-recorded talks is the loss of interactivity, for
sure. I’ve often found myself wondering, if the conference is pre-recorded,
what’s stopping me, the viewer from just watching the talks later if/when
they’re uploaded? What value can we add to viewers to encourage them to tune in
live?</p>
<p>But there are ways to make it work! For example, speakers can chat with viewers, add links, or even answer questions while their talk is streaming. This is something we actively encouraged.</p>
<p>We also added a live Q&A segment at the end of each talk, where our emcee would chat with the speaker, relay questions, or ask some of their own.</p>
<p>Lastly, with the remote aspect of the conference, we can open up our CFP to
speakers from around the world who wouldn’t normally be able to travel to
Vienna to deliver a talk. If they’re in a time zone that would make it
impossible for them to join live for one reason or another, they can still
provide a talk!</p>
<p>However, based on my experiences and talking with folks experienced in giving
talks, pre-recording talks as a speaker is challenging. That’s a story for
the next section, though.</p>
<h3 id="recording-the-talks">Recording the talks</h3>
<p>Similar to how <a href="https://futuresync.co.uk/">Future Sync</a> instructed their
speakers, we recommended that speakers use <a href="https://obsproject.com/">OBS</a> to
record their talks.</p>
<p>We asked them to reserve the majority of the frame, at the top left, for their
slides, with their camera at the bottom right, like in the screenshot below:</p>
<p><img src="https://ramonh.dev/assets/blog/2020/08/online-conf/obs.png" alt="Screenshot of OBS in action" loading="lazy" /></p>
<p>Though it worked for the most part, we did encounter some difficulties with the
videos. Not all of us are experienced with recording videos, and it’s a very
different environment from delivering a talk onstage, since you need to worry
about things like microphone, camera, lighting and audio levels/sources on your
own.</p>
<p>Because of these new challenges, we had some issues such as dual audio sources
in the videos leading to a strange echoing, as well as full screen issues
causing the slides to cover the speakers’ camera feed.</p>
<p>One thing to bear in mind is that some speakers, given they have the option,
might prefer not to show their camera. Come to think of it, especially given
we’re providing live captioning, maybe it’s not essential?</p>
<p>Interestingly, one of our speakers opted for having just their video feed,
without any slides. The space is there to be creative! It is, however,
important to make sure that the talks are in a format that’s accessible.</p>
<p>In the future, preparing a screencast speakers can follow, or holding 1-on-1 sessions
with those who want them, would be even more helpful.</p>
<h3 id="captions">Captions</h3>
<p>As soon as we decided the conference was taking place online, we immediately
set out to figure out closed captioning. It’s a small step we could take toward
having as inclusive an experience as possible.</p>
<p>The service we went with was one called <a href="https://whitecoatcaptioning.com/">White Coat
Captioning</a>. I’ve been fortunate to catch
their services at conferences I’ve spoken at or organised, like EuRuKo both in
<a href="https://euruko2018.org/">2018</a> and <a href="https://euruko2019.org/">2019</a>, and <a href="https://2019.jsconf.us/">JSConf US
2019</a>. They were a no-brainer, so I contacted them.</p>
<p>What I found out was pretty fascinating! They provide us with a
<a href="https://streamtext.net/">StreamText</a> URL that attendees could open and follow
the captioning provided live. What I later learned though is that with
StreamText, you can plug in a <a href="https://support.google.com/youtube/answer/3068031?hl=en">YouTube Live Subtitle Ingestion
URL</a> and the
StreamText content will be automatically carried over to YouTube’s closed
captioning system! This gives viewers the option to follow the captions along
either with StreamText or directly on YouTube! Neat!</p>
<h3 id="panel-discussion">Panel discussion</h3>
<p>We offered this as an opportunity for folks to have a live Q&A with speakers
and/or invited guests. StreamYard allows us to have up to ten guests at a time
in the stream, so we were easily able to have six participants in the
discussion.</p>
<p>We did have a technical issue where one of the panelists had the power go out
in the town they were living in, but they joined a bit later with a mobile
connection. Ultimately, not too stressful for them, thankfully. Phew!</p>
<h3 id="lightning-talks">Lightning talks</h3>
<p>Similar to how we held the panel discussion, we were able to leverage
StreamYard and invite 6 people on to give their lightning talks. In order to
make things smooth, we held these without slides (so no screensharing).</p>
<p><mark>Story time</mark>: We did have an issue where one speaker couldn’t get
their machine to work with StreamYard, something about their browser being
incompatible. Some quick thinking led me to try opening Discord on Chrome,
starting a video call with them and then sharing that tab and making very sure
to be muted, myself. It totally worked! Never hurts to think outside the box. ☺️</p>
<h2 id="on-the-day">On the day</h2>
<p>Below are some of my thoughts on the day of the conference:</p>
<ul>
<li><mark>Timing</mark> is essential! Need a cup of coffee or a trip to the bathroom? Make sure a colleague is there to keep an eye on things.</li>
<li>Keep a <mark>separate tab</mark> with the video open so you can see how long until the emcee and speaker will do a Q&A.</li>
<li>A <mark>separate voice chat</mark> on Discord with a fellow organiser helps keep you company and a check on things.</li>
<li>Checklists help! When switching between different scenes (e.g. emcee to pre-recorded video), have a procedure for opening the tab and making sure “Share audio” is checked when sharing it (this is <mark>crucial</mark>!!)</li>
</ul>
<h2 id="after-the-day">After the day</h2>
<p>A few things we can improve:</p>
<h3 id="speaker-support">Speaker support</h3>
<p>We usually hold speaker coaching sessions before the conference. This includes
giving feedback on slides, talk structure, and the like.</p>
<p>I think we can expand on this to provide support on how to pre-record talks,
check lighting, audio, and so on!</p>
<p>With a budget and good timing, we can also help provide equipment, like a
microphone. Some speakers ended up using their laptop microphone!</p>
<h3 id="audio-level-tweaks">Audio level tweaks</h3>
<p>When receiving the talks, it’s important that we check them as soon as possible
to make sure the audio levels are consistent. Note that this doesn’t mean audio
quality! I’m talking about making sure that the volume is consistent so as to
make sure that switching from a talk, to the emcee, to a partner video doesn’t
startle our viewers!</p>
<p>Audio levels can be tested by having a colleague watch on a test YouTube
stream! This helped us a lot in figuring out where our weak spots were.</p>
<h3 id="audience-integration">Audience Integration</h3>
<p>While being at home means that some audience members can relax with a hot drink
and enjoy the talks, or have them on a separate screen while working on other
stuff, having the option to chat with speakers and fellow attendees is a huge
part of the conference experience!</p>
<p>One thing I found really fun and ingenious from this year’s <a href="https://roguelike.club/event2020.html">Roguelike
Celebration</a> was their idea to build a
<a href="https://dev.to/lazerwalker/using-game-design-to-make-virtual-events-more-social-24o">game into their event
experience</a>.</p>
<p>Another thing I noticed was that, since we left our breakfast hangout voice
channel open, people were beginning to use it! They’d just hang out there and
chat with one another, either during talks or in between them. Definitely
something to consider.</p>
<h2 id="and-that-was-just-the-av-portion-of-the-conference">And that was just the A/V portion of the conference!</h2>
<p>It goes without saying that these conferences require a lot of work in order to
be successful, and I’m so lucky to have a <a href="https://www.womentechmakers.at/team/">team of
volunteers</a> around me who put so much
work into it.</p>
<p>Hope this helps you when figuring out these aspects! If you’ve got some you
wanna tell me about, <a href="https://twitter.com/hola_soy_milk">let me know</a>!</p>Ramón HuidobroThis year’s Women Techmakers Vienna conference was meant to take place in March 2020, but had to be postponed. After three months of monitoring the possibility of holding it later, we as a team decided to hold it online.Running GUI Linux applications in WSL 22020-09-30T10:24:00+02:002020-09-30T10:24:00+02:00https://ramonh.dev/2020/09/30/wsl2-gui-apps<p>I’ve been really enjoying my time using the Windows Subsystem for Linux, or
<a href="https://docs.microsoft.com/en-us/windows/wsl/about">WSL</a> for short. With it,
I’ve been able to develop web software on Windows with the Linux environment familiar to me.</p>
<p>One issue I’d been running into, particularly with WSL 2 is that I sometimes
need to run GUI applications from inside Ubuntu.</p>
<h2 id="why-would-you-need-to-run-gui-applications-in-wsl">Why would you need to run GUI applications in WSL?</h2>
<p>Well, one example is running system tests on Rails apps I develop. In my client
work, I run <a href="https://github.com/teamcapybara/capybara">Capybara</a> using Firefox in headless mode using <a href="https://github.com/mozilla/geckodriver/releases">geckodriver</a>, which works great on WSL!</p>
<p>Problem is, sometimes I need to do so with headless mode turned off. Maybe
some tests are failing, or I just wanna make sure that some layout parts are
working well. If I tried, running <code class="language-plaintext highlighter-rouge">firefox</code> from within WSL would give me the
following error:</p>
<div class="language-plaintext highlighter-rouge"><pre class="highlight"><code> Unable to init server: Could not connect: Connection refused
Error: cannot open display: :0
</code></pre>
</div>
<p>What’s this display about, then?</p>
<h2 id="xserver">Xserver</h2>
<p>The <code class="language-plaintext highlighter-rouge">display</code> mentioned above refers to an
<a href="https://www.x.org/releases/X11R7.7/doc/man/man1/Xserver.1.xhtml">Xserver</a>,
used by Linux to manage windows, as well as keyboard and mouse interaction with
applications.</p>
<p>I found a tool that does just this for us!
<a href="https://sourceforge.net/projects/xming/">Xming</a> is an Xserver for Windows,
and it comes with several utilities for running it on Windows 10.</p>
<h2 id="setting-up-xming-on-windows">Setting up Xming on Windows</h2>
<p>After downloading and installing the utility, we can run the Xlaunch utility with the following settings:</p>
<ul>
<li>“Multiple windows” selected, display number set to 0</li>
<li>“Start no client” selected</li>
<li>“Clipboard” and “No Access Control” selected</li>
<li>Click on Finish</li>
</ul>
<p>Xming will now be running in the background!</p>
<h2 id="setting-up-wsl-for-interacting-with-xming">Setting up WSL for interacting with Xming</h2>
<p>We next need to set up the <code class="language-plaintext highlighter-rouge">DISPLAY</code> environment variable to point to our running Xming server. With WSL 1, I could set this to be <code class="language-plaintext highlighter-rouge">:0</code>:</p>
<div class="language-plaintext highlighter-rouge"><pre class="highlight"><code> export DISPLAY=:0
</code></pre>
</div>
<p>This, however, won’t work with WSL 2 because, <a href="https://docs.microsoft.com/en-us/windows/wsl/compare-versions">among other
differences</a>, it
runs inside a virtual machine. Instead, we can point the display at the Windows host’s IP address.</p>
<p>This is where the discussion on
<a href="https://github.com/microsoft/WSL/issues/4106#issuecomment-501532834">GitHub</a>
really came in handy:</p>
<blockquote>
<p>How are you seeing your DISPLAY variable in your Linux environment? Currently you will need to specify the IP address of the host, you can easily find this by looking at your /etc/resolv.conf file:</p>
<div class="language-plaintext highlighter-rouge"><pre class="highlight"><code>root@BENHILL-DELL:/mnt/c/Users/benhill# cat /etc/resolv.conf
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateResolvConf = false
nameserver 192.168.110.177
</code></pre>
</div>
<p>Then you’ll run:</p>
<p>export DISPLAY=192.168.110.117:0</p>
<p>You may also need to launch vcxsrv with the -ac argument.</p>
<p>This is an area that we are working on improving in an update soon.</p>
</blockquote>
<p>Even more helpfully, a poster <a href="https://github.com/microsoft/WSL/issues/4106#issuecomment-501885675">lower down</a> suggested the following:</p>
<div class="language-plaintext highlighter-rouge"><pre class="highlight"><code>export DISPLAY=$(cat /etc/resolv.conf | grep nameserver | \
awk '{print $2}'):0
</code></pre>
</div>
<p>Adding this line to the shell profile (<code class="language-plaintext highlighter-rouge">~/.zshrc</code> in my case) will set it up on every shell instance I start up! If you wanna do it in the current one, you can do so with the following:</p>
<div class="language-plaintext highlighter-rouge"><pre class="highlight"><code>$ source ~/.zshrc
</code></pre>
</div>
<p>Sadly, we’re not done yet. Running <code class="language-plaintext highlighter-rouge">firefox</code> now just causes a big ol’ nothing to happen.</p>
<h2 id="windows-firewall">Windows Firewall</h2>
<p>The last step before I was able to run <code class="language-plaintext highlighter-rouge">firefox</code> was to allow Xming through the firewall. To do that, we’ll:</p>
<ol>
<li>Open up “Windows Security” (I usually type it into the Start menu)</li>
<li>Click on “Firewall & network protection”</li>
<li>Click on “Allow an app through the firewall”</li>
</ol>
<p>This will show the list of apps and their connection permissions through private and public networks, as shown below:</p>
<p><img src="https://ramonh.dev/assets/blog/2020/09/wsl-gui/windows_firewall.png" alt="Allowed applications for Windows Defender Firewall" loading="lazy" /></p>
<p>You need to find “Xming X Server” on this list and make sure it’s allowed (checked) for a public network!</p>
<h3 id="why-does-xming-need-to-communicate-on-public-networks">Why does Xming need to communicate on public networks?</h3>
<p><a href="https://docs.microsoft.com/en-us/windows/wsl/compare-versions#accessing-a-wsl-2-distribution-from-your-local-area-network-lan">According to the docs</a>:</p>
<blockquote>
<p>WSL 2 has a virtualized ethernet adapter with its own unique IP address.</p>
</blockquote>
<p>Our WSL instance isn’t connected to the local network directly, but rather through Windows using this virtualized ethernet adapter. It’s as if there was an ethernet cable connected between Ubuntu and Windows.</p>
<p>This is different from how WSL 1 worked. WSL 2 requires that we set up networking like we would for any other virtual machine. If you’re curious, <a href="https://www.youtube.com/watch?v=yCK3easuYm4">this video</a> offers a good in-depth look at how WSL networking works.</p>
<p>Once we do this, back in WSL, we can just run <code class="language-plaintext highlighter-rouge">firefox</code>, and voilà, it’s there! Heck, let’s run <code class="language-plaintext highlighter-rouge">gedit</code> too:</p>
<p><img src="https://ramonh.dev/assets/blog/2020/09/wsl-gui/wsl_apps.jpg" alt="Firefox and GEdit running in Windows" loading="lazy" /></p>
<h2 id="whats-next">What’s next?</h2>
<p>Well! Shortly before I finished writing all of this down, I found this here
<a href="https://twitter.com/craigaloewen/status/1308452901266751488">tweet</a>.</p>
<p>The WSL team is working on getting this feature built in! Soon, it looks like
GUI Linux apps will have not only their own taskbar icons, but proper shadowed
windows and full compatibility without having to worry about Xservers,
firewalls, or anything like that. Thank you so much, WSL team!</p>
<p>I’m still happy I learned what I learned about Xservers, and until then, you
can use GUI apps by doing what I did above!</p>
<p>Big, big thanks to <a href="https://www.joseforthe.win/">José</a> for pairing with me to figure this out!</p>Ramón HuidobroGranting user read/write permissions to a USB device in Ubuntu Linux on boot2020-09-22T10:24:00+02:002020-09-22T10:24:00+02:00https://ramonh.dev/2020/09/22/usb-device-linux-startup<p>One of my recent projects involved having a cash register print receipts to a
thermal printer. In order to do this, we had a <a href="http://sinatrarb.com/">Sinatra</a>
server running on boot on the device that would, upon receiving a <code class="language-plaintext highlighter-rouge">POST</code> request, print out the <a href="https://github.com/escpos/escpos">ESC/POS</a> data.</p>
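<p>The printing itself boils down to writing raw ESC/POS control bytes to the printer’s device file. As a rough, framework-free sketch (the helper name and receipt text are made up, not the actual app’s code):</p>

```ruby
# Rough sketch: write ESC/POS control bytes straight to the printer's
# device file. "print_receipt" is a hypothetical helper, not the real app.
def print_receipt(device_path, text)
  File.open(device_path, "wb") do |printer|
    printer.write("\x1b@")         # ESC @  -- initialise the printer
    printer.write(text + "\n")     # the receipt text itself
    printer.write("\x1dV\x41\x03") # GS V 65 3 -- feed and (partial) cut
  end
end

# On the real machine, device_path would be /dev/usb/lp0:
# print_receipt("/dev/usb/lp0", "Total: 12.50 EUR")
```

<p>The Sinatra route handler would then call something along these lines when a <code class="language-plaintext highlighter-rouge">POST</code> request comes in.</p>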
<p>The way I did this was by creating a <a href="https://medium.com/@benmorel/creating-a-linux-service-with-systemd-611b5c8b91d6">custom
service</a>
on Ubuntu that automatically starts on boot that runs the following script:</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code> <span class="nv">$ </span>su - <NON_ROOT_USER> -l -c <span class="se">\</span>
<span class="s2">"cd /path/to/code && </span><span class="se">\</span><span class="s2">
bundle exec ruby app.rb /dev/usb/lp0"</span>
</code></pre>
</div>
<p>In this script, we’re passing the file path <code class="language-plaintext highlighter-rouge">/dev/usb/lp0</code> to the server. This is the device file for the thermal printer I wish to print to.</p>
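<p>For reference, a service like the one described above could be sketched as a systemd unit (the unit contents, paths and user here are placeholders, not the actual setup):</p>

```ini
[Unit]
Description=Receipt printing server
After=network.target

[Service]
# Run as the unprivileged user instead of wrapping the command in su
User=<NON_ROOT_USER>
WorkingDirectory=/path/to/code
ExecStart=/usr/bin/env bundle exec ruby app.rb /dev/usb/lp0
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

<p>Enabling a unit like this with <code class="language-plaintext highlighter-rouge">systemctl enable</code> would start the server on boot as the non-root user.</p>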
<p>With all this set, what happens when I power on the device and send in a <code class="language-plaintext highlighter-rouge">POST</code> request?</p>
<p>Sadly, I get an error saying that I don’t have permission to write to the file
<code class="language-plaintext highlighter-rouge">/dev/usb/lp0</code>. I can run the following command to fix this, though:</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code> <span class="nv">$ </span>sudo chown <NON_ROOT_USER> /dev/usb/lp0
</code></pre>
</div>
<p>Once I enter it, I can print to my heart’s (and paper capacity’s) content! However, once the device gets rebooted, the file will be re-created without the permissions I assigned. Well, darn.</p>
<p>I did find a solution that could help me, though!</p>
<h1 id="custom-udev-rules">Custom udev rules</h1>
<p>This was when I learned about <a href="https://wiki.debian.org/udev"><code class="language-plaintext highlighter-rouge">udev</code></a>, the system for managing device nodes on Linux. With it, I can write specific rules for recognising devices:</p>
<blockquote>
<p>udev allows for rules that specify what name is given to a device, regardless of which port it is plugged into. For example, a rule to always mount a hard drive with manufacturer “iRiver” and device code “ABC” as /dev/iriver is possible. This consistent naming of devices guarantees that scripts dependent on a specific device’s existence will not be broken.</p>
</blockquote>
<p>This is <em>exactly</em> what I need! I can assign all USB devices plugged into the Linux machine to be owned by <code class="language-plaintext highlighter-rouge">NON_ROOT_USER</code>.</p>
<p>By following the instructions, I created a file in the directory <code class="language-plaintext highlighter-rouge">/etc/udev/rules.d</code> called <code class="language-plaintext highlighter-rouge">99-perm.rules</code> with the following line:</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code> <span class="nv">SUBSYSTEM</span><span class="o">==</span><span class="s2">"usb"</span>, <span class="nv">OWNER</span><span class="o">=</span><span class="s2">"<NON_ROOT_USER>"</span>
</code></pre>
</div>
<p>With this rule in place, rebooting the machine immediately made it work!</p>
<h1 id="making-it-more-secure">Making it more secure</h1>
<p>You might’ve been asking yourself:</p>
<blockquote>
<p>Does this mean all USB devices will be owned by <code class="language-plaintext highlighter-rouge">NON_ROOT_USER</code>?</p>
</blockquote>
<p>And you would be absolutely correct. Maybe we want to only grant permissions for that one device, which we can totally do! We can base it on the serial number of the device. For example, for my thermal printer, I can find this out with the following command:</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code> <span class="nv">$ </span>udevadm info -a -n /dev/usb/lp0
</code></pre>
</div>
<p>This’ll print out some results, including this line:</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code> ATTRS<span class="o">{</span>serial<span class="o">}==</span><span class="s2">"L29955839962630040"</span>
</code></pre>
</div>
<p>Let’s open up <code class="language-plaintext highlighter-rouge">/etc/udev/rules.d/99-perm.rules</code> again, and replace the line we added with the following:</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code> <span class="nv">SUBSYSTEM</span><span class="o">==</span><span class="s2">"usb"</span>, ATTRS<span class="o">{</span>serial<span class="o">}==</span><span class="s2">"L29955839962630040"</span>, <span class="nv">OWNER</span><span class="o">=</span><span class="s2">"<NON_ROOT_USER>"</span>
</code></pre>
</div>
<p>See, with this we’re telling <code class="language-plaintext highlighter-rouge">udev</code> that we wanna own this specific thermal printer. A quick reboot and, score, it works!</p>
<h1 id="whats-next">What’s next?</h1>
<p>A solid question! There is certainly more we can do to make sure this system works.</p>
<p>For example, if I were to have multiple thermal printers on a device, then our printer wouldn’t be guaranteed to get the name <code class="language-plaintext highlighter-rouge">lp0</code>. For this case we can use the <code class="language-plaintext highlighter-rouge">SYMLINK+=</code> attribute to specify a filename it’ll always have. So instead of <code class="language-plaintext highlighter-rouge">lp0</code> or <code class="language-plaintext highlighter-rouge">lp1</code> I could point my script to <code class="language-plaintext highlighter-rouge">thermal-printer</code> or something:</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code> <span class="nv">SUBSYSTEM</span><span class="o">==</span><span class="s2">"usb"</span>, ATTRS<span class="o">{</span>serial<span class="o">}==</span><span class="s2">"L29955839962630040"</span>, <span class="nv">OWNER</span><span class="o">=</span><span class="s2">"<NON_ROOT_USER>"</span>, SYMLINK+<span class="o">=</span><span class="s2">"usb/thermal-printer"</span>
</code></pre>
</div>
<p>This will then create a <a href="https://en.wikipedia.org/wiki/Symbolic_link">symbolic
link</a> called
<code class="language-plaintext highlighter-rouge">/dev/usb/thermal-printer</code> that I can point my Ruby server to!</p>
<p>Big thank you to <a href="http://reactivated.net/writing_udev_rules.html">Daniel Drake</a>
for this helpful guide on how to write rules for <code class="language-plaintext highlighter-rouge">udev</code>!</p>
<p>I’m eager to hear your thoughts or ideas for improvements, so <a href="https://twitter.com/hola_soy_milk">hit me up on
Twitter</a>!</p>Ramón HuidobroRunning a learn together coding group remotely2020-06-13T10:24:00+02:002020-06-13T10:24:00+02:00https://ramonh.dev/2020/06/13/study-group-online<p>Our coding study group <a href="https://study-jams.github.io">Study Jams</a> needed a place
online where folks could come together remotely, share and learn together, as we’ve
done in person for a number of years.</p>
<p>Here’s the story of how we managed to make it work!</p>
<!-- more -->
<h2 id="whats-a-coding-study-group">What’s a coding study group?</h2>
<p>A solid question! What we call a study group has several other names, like
‘Hack and Learn’, or ‘Coffee and Code’, or ‘Learn Together’, and those are just
the ones off the top of my head!</p>
<p>The premise is similar across these: a regularly-held event where people gather, usually in the early evening, and:</p>
<ul>
<li>Discuss coding side projects</li>
<li>Chat all things general about coding</li>
<li>Work on coding side projects</li>
<li>Get help from others if you’re stuck on something</li>
<li>Introduce people or get introduced to programming</li>
</ul>
<p>At Study Jams, we strongly emphasise how important it is to have a <a href="https://study-jams.github.io/#conduct">positive,
welcoming</a> community.</p>
<h2 id="what-changed-when-going-remote">What changed when going remote?</h2>
<p>Not being able to meet face to face presented a number of challenges we
had to face (sorry). This includes factors like not being able to easily move
over to looking at someone’s screen without interrupting others, or having a
side conversation with a person one-on-one while still being able to absorb the
general flow of the conversation.</p>
<p>Overall, attending a meetup online is
fundamentally different from doing so in person. As my good buddy
<a href="https://flak.is/">Flaki</a> put it oh-so-well:</p>
<blockquote>
<p>…it’s very different not just because the environment is different — but that we ourselves are different, behaving differently!</p>
<p class="name">💜</p>
</blockquote>
<h2 id="community-dynamics">Community dynamics</h2>
<p>Keeping in mind that being exclusively online makes it harder for people to
directly interact, we need to consider what helps people speak up and/or make
the most of hacking together:</p>
<h3 id="catch-up-or-introduction-round">Catch-up or introduction round</h3>
<p>Given that we’re a smaller group (around 4-6 people at a time per session), not a whole lot of introduction to who we are is needed, but some time to catch up since the last session helps get things going. Grease the wheels, if you will!</p>
<p>If people would rather not say much, that’s fine, naturally! It’s important to impose no pressure.</p>
<p>I’ve found that having these helps get the conversation and the topics going.</p>
<h3 id="technical-issues">Technical issues</h3>
<p>As online meetings become more prominent through remote work, we all get used
to the initial song and dance:</p>
<ul>
<li>“Can you hear me?”</li>
<li>“You’re muted”</li>
<li>“And <em>bark bork</em> to your dog, too!”</li>
</ul>
<p>Given the relaxed nature of the learn together event, we were able to push
through it.</p>
<h3 id="long-silences">Long silences</h3>
<p>I’ll admit that I have a bit of a fear of long silences during these kinds of
events. When having these in person, I was able to see people just hacking away
on their projects. But in an online context, where (I had initially assumed)
the primary goal is to talk, I was initially scared of the silences, and tried
to keep conversations flowing.</p>
<p>It wasn’t until I attended my first <a href="http://berline.rs/">Hack and Learn</a> that I
noticed how similar these silences were to their in-person equivalents. It was
reassuring even, to know we were all just hacking on our own thing!</p>
<h3 id="enforcing-the-code-of-conduct">Enforcing the Code of Conduct</h3>
<p>Moderation is key to enforceability during online events. Having a volunteer from outside the core organisers help me moderate keeps us accountable.</p>
<p>Furthermore, reminding folks of said code of conduct at the beginning of a coding session, having it be immediately visible on the website, and having a popup reminding folks of its existence when joining a session are a few of the ways we can make it visible and, more importantly, make it clear that we take it seriously.</p>
<p>If you’re wondering why it’s necessary to have a code of conduct,
<a href="https://www.ashedryden.com/blog/codes-of-conduct-101-faq#coc101why">here’s</a>
a few reasons written far better than I could have.</p>
<h2 id="collaborative-notetaking">Collaborative notetaking</h2>
<p>One big takeaway I had from visiting the online editions of the <a href="http://berline.rs/">Berlin Rust
community’s Hack and Learn</a> events is the concept of taking
notes to keep track of what was brought up and links shared during the meetup.
One thing that blew me away was that they kept a link to the previous session’s
notes!</p>
<p>This serves the purpose of letting folks follow along at their own pace and keep
these for later, but what I found myself doing was also going through the older
note sets of previous events and seeing what was shared there too! Very handy.</p>
<p>The following is a breakdown of the options for this that we’ve dabbled with:</p>
<h3 id="hackmd"><a href="https://hackmd.io">HackMD</a></h3>
<p>HackMD is a tool for collaborative Markdown writing. We were able to use it to share links, code and other resources in real time as we spoke about different topics. The app is open source and can be self-hosted, but they have a free storage plan for open teams, which we are!</p>
<p>Just because we are a tech-oriented study group, however, doesn’t mean we can
assume people will be proficient or familiar with Markdown. This is where the
rich text editing options come in handy!</p>
<p><img src="https://ramonh.dev/assets/blog/2020/05/study-jams/hackmd.jpg" alt="Checkin' out HackMD" loading="lazy" /></p>
<p>Watching the text update itself as people chime in with links to resources or
projects of theirs is incredibly gratifying.</p>
<h3 id="google-docs"><a href="https://docs.google.com">Google Docs</a></h3>
<p>Right off the bat, I’ll say we didn’t try Google Docs, but it being one of the
most-used tools for this led us to consider it as an option.</p>
<p>The familiarity of a text-processing environment, coupled with the ability to work collaboratively, made it an interesting venue. A big concern for us was that not everyone has a Google account.</p>
<p>Turns out, though, that you <mark>can</mark> view and edit Google Docs <a href="https://support.google.com/drive/thread/12552865?hl=en&msgid=12639250">without
an
account</a>,
so there’s that!</p>
<p>We ultimately settled on HackMD.</p>
<h2 id="voicevideo-chat">Voice/Video Chat</h2>
<p>One of the most commonly seen media used for online meetups is using video
chat, in the format of each user having a camera and a microphone.</p>
<p><img src="https://ramonh.dev/assets/blog/2020/05/study-jams/online-chat-test.jpg" alt="Screenshot showing a test of all online platforms we toyed with running simultaneously" loading="lazy" /></p>
<p><em>ALL the platforms!</em></p>
<p>As shown in the above image, we had quite the experiment! We were jumping
around a whole bunch of these, which I’ll go over now:</p>
<h3 id="jitsi-meet"><a href="https://meet.jit.si/">Jitsi Meet</a></h3>
<p>Real quick, we opted not to try out Zoom, due in part to security issues that
come with its ubiquity (<a href="https://en.wikipedia.org/wiki/Zoombombing">Zoombombing</a> comes to mind as an
example), and the fact that we had a local Jitsi instance available to us,
thanks to <a href="https://twitter.com/informatom">Stefan</a>, one of our dedicated
members, and his generosity!</p>
<p>Jitsi is an alternative to Zoom, allowing us to have video chats with
screensharing on the fly. A few reasons we were drawn to it include it being
open source software, and having our video chats be fully encrypted!</p>
<p>The way this works is by using a browser API called
<a href="https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API">WebRTC</a>, short
for Web Real-Time Communication. This allows a Jitsi server to set up the
connection, but the actual transfer of data only takes place between the video
chats, without the server acting as an intermediary.</p>
<p>This being a WebRTC technology also affords Jitsi the feature of having any
number of chat rooms. The URL of the room creates that room dynamically,
meaning that room will exist provided there’s at least one user in that call.
After that, poof! It’s gone. It also allows us as moderators to enable a room
only once a moderator comes in. This also prevents abuse.</p>
<p>This same advantage, however, becomes its caveat. There were some issues with Firefox’s implementation of WebRTC, which are <a href="https://github.com/jitsi/jitsi-meet/issues/4758">being worked on</a>. At the time of writing, users on Firefox can access Jitsi meetings, but Stefan noticed some increased load coming from those users.</p>
<p>I haven’t experienced this myself, as our meetings tend not to exceed 10
people, but there would appear to be a <a href="https://community.jitsi.org/t/maximum-number-of-participants-on-a-meeting-on-meet-jit-si-server/22273/3">limitation of 75 people per
room</a>,
though this is on Jitsi’s server itself. Not just that, but as shown in the
link here, the experience starts to suffer with 35. Something to bear in mind!</p>
<p>Something that Zoom provides that’s been pretty successful during meetups, however, is the concept of breakout rooms, where people are randomly assigned a room during a break with a handful of folks to chat to. This can in theory be done with Jitsi, but requires some manual room assignment.</p>
<h3 id="mozilla-hubs"><a href="https://hubs.mozilla.com/">Mozilla Hubs</a></h3>
<p>Brought to you by Mozilla, it’s Hubs!</p>
<p>I could go over what Hubs is, but who better to explain than <a href="https://hubs.mozilla.com/docs/welcome.html">the Hubs team themselves</a>! In their own words:</p>
<blockquote>
<p>Hubs is a social VR platform that runs right in your browser. With Hubs you can create your own virtual space with a single click and invite others to join using a URL. No installation process or app store required.</p>
<p>Watch videos with friends, give presentations, or meet with colleagues remotely. Hubs makes it easy to connect with others and share images, videos, 3D models, and more.</p>
<p>Hubs works across platforms. Got a VR headset? Awesome! If not, you can use your desktop computer, laptop, tablet, or mobile device.</p>
</blockquote>
<p>Essentially, we can use Hubs to set up a room where folks can come in and hang
out, but in 3D! Folks with most kinds of devices can join in and chat!</p>
<p><img src="https://ramonh.dev/assets/blog/2020/05/study-jams/first-hangout.jpg" alt="4 cool cats hanging out in hubs" loading="lazy" /></p>
<p>Having these nice avatars means we can roam with as much anonymity as we want.
We can also use objects in 3D space imported from <a href="https://hubs.mozilla.com/docs/spoke-adding-scene-content.html">3D modelling
environments</a> to
enable screen sharing, or… webcam
sharing.</p>
<p><img src="https://ramonh.dev/assets/blog/2020/05/study-jams/hubs-sc.jpg" alt="Screenshot of hubs with my face hovering in the sky, as someone's Santa Claus avatar watches my face" loading="lazy" />
<em>As you can see, Santa felt my presence.</em></p>
<p>I very much appreciated the Hubs team’s
<a href="https://hubs.mozilla.com/docs/intro-events.html">tips</a> on running successful
events, such as <a href="https://discord.com/">Discord</a> (A community chat/voice/gaming
platform) server integration, setting it up with your code of conduct, and
more.</p>
<p>Another factor that makes Hubs an extremely interesting choice is that it aims to solve a big problem with most video chat services as a concept.</p>
<p>See, at an in-person meetup you can easily have side conversations by taking folks aside to talk to them. This doesn’t work in environments like Jitsi, though, because everyone talks at the same volume, with no exceptions nor possibilities for side conversations. We could perhaps set up side rooms, but the act of moving naturally from one conversation to another is kind of lost.</p>
<p>That’s where Hubs comes in with spatialised voice chat!</p>
<h4 id="what-is-spatialised-voice-chat">What is spatialised voice chat?</h4>
<p>Unlike video conferences, where it’s expected that only one person can talk at a time, spatialised voice chat makes each person’s voice come from (and be as loud as warranted by) the position of their avatar at the time. If someone is far away from you, you’ll hear their voice more faintly than you would if they were right next to you.</p>
<p>Hubs has this feature from the get-go, making it a breeze to have this kind of voice chat.</p>
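<p>As a toy illustration of the idea (this is <em>not</em> Hubs’ actual audio model, just a made-up linear rolloff): scale each voice’s volume by the distance between the two avatars.</p>

```ruby
# Toy spatial-audio model: full volume next to the listener, fading
# linearly to silence at some maximum audible range. Not Hubs' real curve.
MAX_RANGE = 20.0 # metres; arbitrary for this sketch

def gain_for(distance)
  [1.0 - distance / MAX_RANGE, 0.0].max
end

gain_for(0.0)  # => 1.0 (right next to you)
gain_for(10.0) # => 0.5 (half the range away, half the volume)
gain_for(30.0) # => 0.0 (out of earshot)
```

<p>Real engines use fancier rolloff curves and stereo panning, but the distance-to-volume mapping is the heart of it.</p>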
<p>Trying this out as an experiment was so much fun! I can imagine it would have been even more so with a VR headset, which we didn’t happen to have, so I don’t have first-hand experience from within the group!</p>
<p>When it came to using Hubs as a space to do social coding, that’s when we came
upon a couple of sticking points.</p>
<p>One thing that I hadn’t considered moving into this kind of a 3D space,
especially in a laptop/desktop browser where you’re expected to navigate said
3D space, is that these controls aren’t immediately familiar to someone who’s
relatively new to the concept. Don’t get me wrong, this has nothing to do with
the on-screen instructions or documentation, <a href="https://hubs.mozilla.com/docs/intro-hubs.html">on the
contrary</a>, they do a terrific
job of giving you a primer to navigate the 3D space!</p>
<p>Problem is that this requires a certain amount of time to become familiar with
the concept of navigating a 3D space, from a first-person perspective, even! We
spent some time trying to drive home the idea that someone could not see what
their avatar looked like because they were looking at the world in Hubs through
the eyes of their avatar, for example.</p>
<p>Furthermore, maybe ironically seeing as I just mentioned the wonders of
spatialised voice chat, we did find ourselves wondering many times if we could
just set a general volume for everybody and disable the spatialised one. This
was mostly due to the fact
that:</p>
<ul>
<li>We’re a small group</li>
<li>We wanted to play around with Hubs’ features off to the side, but found that if we were on the other side of the map spawning rubber duckies, we weren’t able to hear each other!</li>
</ul>
<h3 id="the-online-town"><a href="https://theonline.town/">The Online Town</a></h3>
<p>This platform was brought up during my first attendance of <a href="http://berline.rs">Berlin’s Rust Hack
and Learn</a>.</p>
<p>When we were playing around with Hubs a few weeks later somebody said:</p>
<blockquote>
<p>Is there a 2D version of this?</p>
</blockquote>
<p>That reminded me, this is exactly what Online Town aims to do!</p>
<p>This is a free service that allows you to set up rooms and have spatialised chats!</p>
<p>Similar to how Hubs does it, you have an avatar (with fewer options for
customisation). You then walk around a 2D world and chat with each other!</p>
<p><img src="https://ramonh.dev/assets/blog/2020/05/study-jams/online-town.jpg" alt="Screenshot of the four of us hanging out in Online Town, showing our webcams and avatars" loading="lazy" />
<em>Now with faces to match the avatars!</em></p>
<p>One strong difference from Hubs you might notice is that Online Town right away allows folks to show their webcams. It creates a border around each avatar and their webcam to help distinguish which avatar is speaking.</p>
<p>As previously mentioned, Online Town has spatialised chat. This extends not
only to the voice chat, but also the video chat! The further you move from an
avatar, the more faint the video feed becomes, like they’re fading away! Their
voice becomes more distant as well.</p>
<p>The immediate advantage here for people who are less familiar with navigating a
3D space is that the onboarding process is a lot quicker. The top-down viewing
angle of the characters helps folks understand where they are relative to
others.</p>
<p>Online Town made it really easy for us to break off and have separate
conversations and even keep background ones if we wanted to. Having fewer
“toys” than in Mozilla Hubs kept us focused (though this is not really an
issue, as we were experimenting!).</p>
<p>The one major caveat I found in using Online Town for the purposes of shared
learning is that there is currently no option for screen sharing. Given that
it’s a fairly new platform however, I’m confident more will come! In a way, it
already seems to be happening with <a href="https://gather.town/">Gather</a>, an
initiative that was “spun off” from Online Town.</p>
<p>In hindsight though, given that we were a small group (the 4 pictured above)
the question then became…</p>
<h2 id="do-we-really-need-spatialised-chat-at-a-small-group-size">Do we really need spatialised chat at a small group size?</h2>
<p>This became a sticky question.</p>
<p>I’m not saying we didn’t, but it did start coming up when it was only around 4
people in a learn together environment. I think this’ll be different when we
try learning together in a bigger group size! For the rest of our day of
experimentation, we stuck to Jitsi.</p>
<h2 id="what-other-options-are-there-that-came-up">What other options are there that came up?</h2>
<p>A few that we didn’t get around to:</p>
<h3 id="calla"><a href="https://github.com/capnmidnight/Calla">Calla</a></h3>
<p>Calla is a plugin for Jitsi Meet servers that offers a service not unlike Online Town’s, where folks can do spatialised chat! We didn’t get around to trying it, as it would have had to be installed, but it looked promising. If it’s developed on top of Jitsi Meet, it probably supports screensharing, too! Will have to investigate.</p>
<h3 id="visual-studio-code-live-share"><a href="https://visualstudio.microsoft.com/services/live-share/">Visual Studio Code Live Share</a></h3>
<p>This just casually came up during the Rust Berlin “Hack and Learn” and it blew my
mind. We were using Online Town and somebody brought up an issue they were
having with their code. When they realised Online Town didn’t have
screensharing, they just opened up a Live Share instance on their VS Code.</p>
<p>It was amazing! We were able to connect, almost like we were screen sharing, except from the “comfort” of our code editors. Soon, you won’t even need to have VS Code installed, because <a href="https://docs.microsoft.com/en-us/visualstudio/liveshare/quickstart/browser-join">they’re working on a browser version</a>!</p>
<p>This isn’t all that new as a concept, sure: <a href="https://www.redhat.com/sysadmin/ssh-tmux-screen-sharing">it can be done with a terminal and tmux</a>, which works great for my Vim key fingers, but having it built into one of the most popular text editors opens things up a whole bunch!</p>
<p>With this kind of editor sharing, one major part of the need for screen sharing
is reduced! Would it be completely gone? No. We’d still need it for sharing
things like websites, reading docs, giving presentations, etc.</p>
<h2 id="wrap-up">Wrap-up</h2>
<p>When we started out as a study group in person, a few of us preferred to spend most
of the time rockin’ some headphones and being around people while they hacked
on their stuff. Others wanted to relax, have a coffee, and see how they could
help folks get unstuck in their coding. Or others just wanted to hang out and
talk tech or community. Being around people working on their stuff can be a
huge motivator!</p>
<p>We hope to replicate this experience in an online fashion. Have we figured it out and found a perfect recipe, now that we’ve been at it for a few months?</p>
<p>Absolutely not.</p>
<p>What I’ve learned during this time is that this tech and environment is still
at a fledgling stage. There’s lots to try and learn. Some of it might stick,
some of it might not. Some of it will work wonders for some groups, but not for
others. It’s important to try stuff out!</p>
<p>Yo, I’d love to hear your thoughts and experiences with doing hack togethers online. <a href="https://twitter.com/hola_soy_milk">Hit me up!</a></p>Ramón Huidobro