Deferred Posts, the personal blog of Erwin van Eyk (Jekyll feed, generated 2021-07-04)

Serverless applications: Why, when, and how? (2021-04-01)
https://erwinvaneyk.nl/serverless-use-cases

<p><strong>Simon Eismann, Joel Scheuner, Erwin van Eyk, Maximilian Schwinger, Johannes Grohmann, Nikolas Herbst, Cristina L Abad, Alexandru Iosup - IEEE Software 2020</strong></p>
<p><strong>Abstract</strong> — Serverless computing shows good promise for efficiency and ease of use. Yet, there are only a few, scattered, and sometimes conflicting reports on questions such as: Why do so many companies adopt serverless? When are serverless applications well suited? And how are serverless applications currently implemented? To address these questions, we analyze 89 serverless applications from open-source projects, industrial sources, academic literature, and scientific computing—the most extensive study to date.</p>
<p><a href="https://arxiv.org/pdf/2009.08173" class="btn btn--success btn--large">Publication</a></p>
<p><a href="https://arxiv.org/pdf/2008.11110" class="btn btn--success btn--large">Technical Report</a></p>

How to use private repositories with Go modules (2020-07-01)
https://erwinvaneyk.nl/private-repositories-with-go-mod

<p>Since the release of Go 1.13, Go has native support for Go Modules through <code class="highlighter-rouge">go mod</code>. Like other tooling in the Go ecosystem, such as <code class="highlighter-rouge">go fmt</code> for code formatting, it is intended to be the default, community-accepted approach to dependency management. It supersedes existing approaches, such as Glide and Dep, and integrates smoothly into the existing Go workflow.</p>
<p>It works out of the box, without any additional configuration.</p>
<p>However, that is only true when all dependencies are publicly available. With dependencies on private packages,
additional configuration is needed.</p>
<h2 id="resolving-dependencies-over-ssh-instead-of-https">Resolving dependencies over SSH instead of HTTPS</h2>
<p>Without any additional configuration, trying to fetch a private dependency in a Go Modules project results in the following error:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>go: downloading github.com/myorg/privaterepo v0.0.0-20200408100711-ed766a2975ce
go get github.com/myorg/privaterepo: github.com/myorg/privaterepo@v0.0.0-20200408100711-ed766a2975ce: verifying module: github.com/myorg/privaterepo@v0.0.0-20200408100711-ed766a2975ce: reading https://sum.golang.org/lookup/github.com/myorg/privaterepo@v0.0.0-20200408100711-ed766a2975ce: 410 Gone
server response:
not found: github.com/myorg/privaterepo@v0.0.0-20200408100711-ed766a2975ab: invalid version: git fetch -f origin refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /tmp/gopath/pkg/mod/cache/vcs/13e63a509893edc19353a80fa2c6e28db213d146f72fe43ba65c1ec86624027b: exit status 128:
fatal: could not read Username for 'https://github.com': terminal prompts disabled
</code></pre></div></div>
<p>The problem here is that, by default, Go resolves dependencies by downloading the package with Git over HTTPS.</p>
<p>Instead of using HTTPS, we want to download the private dependency over SSH, so that the local SSH keys can be used to gain access to the package repository. This assumes that you have configured SSH with credentials that can access the private repositories. If you have not, <a href="https://docs.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent">GitHub has an excellent tutorial on how to set this up</a>.</p>
<p>Because this behavior cannot be changed in Go itself, we can instead configure Git to rewrite all HTTPS URLs to their SSH equivalents:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git config --add --global url."git@github.com:".insteadOf https://github.com
git config --add --global url."git@bitbucket.org:".insteadOf https://bitbucket.org
git config --add --global url."git@gitlab.com:".insteadOf https://gitlab.com
</code></pre></div></div>
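<p>As a sanity check, you can verify that the rewrite rules were registered. The sketch below avoids touching your real configuration by pointing <code class="highlighter-rouge">HOME</code> at a scratch directory; in practice you would simply inspect your actual <code class="highlighter-rouge">~/.gitconfig</code>.</p>

```shell
# Use a throwaway HOME so the example does not modify your real config.
export HOME="$(mktemp -d)"

# Register one rewrite rule (same command as above, GitHub only).
git config --add --global url."git@github.com:".insteadOf https://github.com

# Query the rule back; this should print: https://github.com
git config --global --get url."git@github.com:".insteadOf
```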
<h2 id="by-passing-goproxy-for-private-dependencies">Bypassing GOPROXY for private dependencies</h2>
<p>Although this has been resolved in newer versions of Go, in earlier versions resolving dependencies over SSH does not completely solve the issue. With the SSH rewrite in place, trying to download the private dependency results in the following error:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>go: github.com/myorg/privaterepo@v0.0.0-20200512204819-655dcc74320a: reading https://proxy.golang.org/github.com/myorg/privaterepo/@v/v0.0.0-20200512204819-655dcc74320a.mod: 410 Gone
</code></pre></div></div>
<p>This error occurs due to the introduction of a new default for <code class="highlighter-rouge">GOPROXY</code> in Go 1.13. From that version on, Go resolves modules through a module mirror (https://proxy.golang.org) by default. This mirror obviously does not contain, and cannot fetch, the private repositories.</p>
<p>To fix this, there are a couple of options:</p>
<p><strong>GOPROXY</strong>
A crude solution is to simply skip the proxy. This prevents Go from hitting the intermediate proxy servers at all and makes it contact GitHub directly. To do this, set:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export GOPROXY=direct
</code></pre></div></div>
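<p>Note that <code class="highlighter-rouge">GOPROXY</code> accepts a comma-separated list of proxies, tried in order, where <code class="highlighter-rouge">direct</code> means fetching from the version-control origin itself. So rather than skipping the proxy entirely, you can keep the mirror and fall back to direct access; the value below mirrors the default that Go 1.13 introduced.</p>

```shell
# GOPROXY is a comma-separated list; Go tries each entry in order, and
# "direct" means fetching straight from the VCS origin. This value is
# the Go 1.13 default.
export GOPROXY="https://proxy.golang.org,direct"
echo "$GOPROXY"
```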
<p><strong>GOPRIVATE</strong>
Another option is to explicitly mark packages as private by adding them to <code class="highlighter-rouge">GOPRIVATE</code>. You can either mark specific packages:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export GOPRIVATE=github.com/myorg/privaterepo
</code></pre></div></div>
<p>Alternatively, you could mark all packages in your organization as private:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export GOPRIVATE=github.com/myorg
</code></pre></div></div>
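<p>For completeness: <code class="highlighter-rouge">GOPRIVATE</code> accepts a comma-separated list of glob patterns, so several hosts or organizations can be marked private at once. The organization names below are illustrative.</p>

```shell
# GOPRIVATE takes comma-separated glob patterns; matching modules bypass
# both the module proxy and the checksum database. Names are examples.
export GOPRIVATE="github.com/myorg,bitbucket.org/myteam/*"
echo "$GOPRIVATE"
```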
<p>To make this fix permanent, you can either export these variables in your shell start-up scripts, or override Go's defaults with <code class="highlighter-rouge">go env -w</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>go env -w GOPRIVATE=github.com/myorg
</code></pre></div></div>
<p>To verify that the changes have persisted, you can see the environment used for Go commands with:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>go env | grep 'GOPROXY\|GOPRIVATE'
</code></pre></div></div>
<h2 id="conclusion">Conclusion</h2>
<p>With these fixes in place, you should be able to access all private repositories that are accessible with your loaded SSH credentials. In case you want to use HTTPS or cannot rely on SSH, there are alternative (in my opinion, more cumbersome) <a href="https://medium.com/swlh/go-modules-with-private-git-repository-3940b6835727">documented approaches that rely on GitHub access tokens</a>.</p>
<h2 id="further-reading">Further reading</h2>
<ul>
<li>General introduction to the workflow around go modules: <a href="https://blog.golang.org/using-go-modules">https://blog.golang.org/using-go-modules</a></li>
<li>The what and why around Go module proxies: <a href="https://arslan.io/2019/08/02/why-you-should-use-a-go-module-proxy">https://arslan.io/2019/08/02/why-you-should-use-a-go-module-proxy</a></li>
<li>About Go modules in general: <a href="https://golang.org/doc/go1.13#modules">https://golang.org/doc/go1.13#modules</a></li>
</ul>

Beyond Microbenchmarks: The SPEC-RG Vision for a Comprehensive Serverless Benchmark (2020-04-28)
https://erwinvaneyk.nl/serverless-benchmark-vision

<p><strong>Erwin van Eyk, Joel Scheuner, Simon Eismann, Cristina L. Abad, Alexandru Iosup - HotCloudPerf @ ICPE 2020</strong></p>
<p>As we are progressing <a href="/towards-a-serverless-benchmark">towards the serverless benchmark</a>, and as the plans are getting more concrete, we wrote a final vision on the subject.
In this article we focus on motivating the serverless benchmark that we have been working on:
why are serverless benchmarks needed;
why are existing benchmarks and performance studies not sufficient;
and what should be within and outside the scope of this effort?</p>
<p>We further highlight the general method: a structured approach to evaluating the performance of FaaS platforms.
For this we use our <a href="/faas-refarch">FaaS Reference Architecture</a>, basing the experiments and the metrics on the components that we identified.</p>
<p><strong>Abstract</strong> - Serverless computing services, such as Function-as-a-Service (FaaS), hold the attractive promise of a high level of abstraction and high performance, combined with the minimization of operational logic. Several large ecosystems of serverless platforms, both open- and closed-source, aim to realize this promise. Consequently, a lucrative market has emerged. However, the performance trade-offs of these systems are not well-understood. Moreover, it is exactly the high level of abstraction and the opaqueness of the operational-side that make performance evaluation studies of serverless platforms challenging. Learning from the history of IT platforms, we argue that a benchmark for serverless platforms could help address this challenge. We envision a comprehensive serverless benchmark, which we contrast to the narrow focus of prior work in this area. We argue that a comprehensive benchmark will need to take into account more than just runtime overhead, and include notions of cost, realistic workloads, more (open-source) platforms, and cloud integrations. Finally, we show through preliminary real-world experiments how such a benchmark can help compare the performance overhead when running a serverless workload on state-of-the-art platforms.</p>
<p><a href="https://atlarge-research.com/pdfs/HotCloudPerf_2020_benchmark_vision-final.pdf" class="btn btn--success btn--large">Publication</a></p>
<p><a href="../attachments/hotcloudperf-2020-beyond-microbenchmarks-slides.pdf" class="btn btn--success btn--large">Slides</a></p>

Converting Kubernetes unstructured to typed objects (2020-01-02)
https://erwinvaneyk.nl/kubernetes-unstructured-to-typed

<p>To interact with the Kubernetes API using the
<a href="https://github.com/kubernetes/client-go">client-go</a> library there are two
primary APIs: the typed <code class="highlighter-rouge">kubernetes.Interface</code> API and the unstructured
<code class="highlighter-rouge">dynamic.Interface</code> API.</p>
<p>Although using the typed core API is
<a href="https://github.com/kubernetes/client-go/tree/master/examples/create-update-delete-deployment">well-documented</a>,
the dynamic API has fewer examples, and the examples that do exist do not show
more advanced usage, such as how to work with the unstructured responses.</p>
<p>Because of the lack of documentation or examples, it took me some time to find
out the specific package/function to convert unstructured to a typed
object—which is why this post aims to document it for others (or at least my
future self).</p>
<p>The short answer to converting <code class="highlighter-rouge">unstructured.Unstructured</code> to a typed resource
is to use the <code class="highlighter-rouge">runtime.UnstructuredConverter</code> interface. Generally, the
<code class="highlighter-rouge">runtime.DefaultUnstructuredConverter</code> implementation suffices for almost all
use cases.</p>
<p>A full example which interacts with the Cluster CRD from
<a href="https://cluster-api.sigs.k8s.io">Cluster API</a>:</p>
<figure class="highlight"><pre><code class="language-go" data-lang="go">package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/cluster-api/api/v1alpha2"
)

const (
	// Namespace and name of the target Cluster resource (placeholders).
	namespace = "default"
	name      = "my-cluster"
)

func main() {
	// Create a new dynamic client.
	restConfig, err := clientcmd.BuildConfigFromFlags("", "")
	assertNoError(err)
	kubeClient, err := dynamic.NewForConfig(restConfig)
	assertNoError(err)

	// Get a resource (returns an unstructured object).
	// Note: the dynamic client expects the plural resource name.
	resourceScheme := v1alpha2.SchemeBuilder.
		GroupVersion.WithResource("clusters")
	resp, err := kubeClient.Resource(resourceScheme).
		Namespace(namespace).
		Get(name, metav1.GetOptions{})
	assertNoError(err)

	// Convert the unstructured object to a typed Cluster.
	unstructured := resp.UnstructuredContent()
	var cluster v1alpha2.Cluster
	err = runtime.DefaultUnstructuredConverter.
		FromUnstructured(unstructured, &cluster)
	assertNoError(err)

	// Use the typed object.
	fmt.Println(cluster.Status.Phase)
}

func assertNoError(err error) {
	if err != nil {
		panic(err)
	}
}</code></pre></figure>

The SPEC-RG Reference Architecture for FaaS: From Microservices and Containers to Serverless Platforms (2019-11-12)
https://erwinvaneyk.nl/faas-refarch

<p><strong>Erwin van Eyk, Johannes Grohmann, Simon Eismann, Andre Bauer, Laurens Versluis, Lucian Toader, Norbert Schmitt, Nikolas Herbst, Cristina L. Abad, Alexandru Iosup - IEEE Internet Computing, 2019</strong></p>
<p>A key milestone <a href="/towards-a-serverless-benchmark">towards the serverless benchmark</a> that we are working on in the SPEC RG Cloud group: our findings on the architectures of FaaS platforms and related systems were published in IEEE Internet Computing.
The key contribution of this paper is a reference architecture for FaaS platforms, the result of a lengthy survey of the FaaS part of the <a href="https://github.com/cncf/wg-serverless">serverless landscape</a>.
Although (unsurprisingly) some results are already a bit outdated since we investigated the FaaS platforms—AWS released more details about the internal architecture of Lambda, and Knative substantially improved—the contributions of the paper remain valuable.</p>
<p><strong>Abstract</strong> - Microservices, containers, and serverless computing belong to a trend toward applications composed of many small, self-contained, and automatically managed components. Core to serverless computing, Function-as-a-Service (FaaS) platforms employ state-of-the-art container technology and microservices-based architectures to enable users to manage complex applications without the need for systems-level expertise. Victim of its own success, and partially due to proprietary technology, currently the community has a limited overview of these platforms. To address this, we propose a reference architecture and ecosystem for FaaS platforms. Based on a year-long survey of real-world platforms conducted within the SPEC-RG Cloud Group, we highlight specific components and identify common operational patterns.</p>
<p><a href="https://atlarge-research.com/pdfs/spec-rg-referece-architecture-for-faas-2019.pdf" class="btn btn--success btn--large">Publication</a></p>
<p><em>Note: I am planning to write a more in-depth blog post on this topic. Stay tuned.</em></p>

Accessing the docker-for-mac network from a browser: the fast and dirty way (2019-09-28)
https://erwinvaneyk.nl/docker-for-mac-network-from-a-browser

<p>For an authentication-related prototype I needed to access some
services running in containers. Unfortunately, Docker runs in a VM on MacOS, and
the Docker network is not bridged. This prevents us from accessing
the container IPs directly.</p>
<p>Of course, there are many options to expose or proxy specific container ports to
the host, such as using port mapping in Docker or a reverse proxy alongside the
target container.</p>
<p>However, in this project there was an additional constraint: throughout the
project the services assumed (and required!) that the user—or more accurately,
the browser—is able to access the exact same IP/DNS name of the containers as
the containers themselves. The common solutions are based on proxying, which
alters the address. I also wanted to avoid modifying the actual services, which
would be time-consuming and pointless for a prototype. Finally, I did not want
to depend on an external DNS server, which would overcomplicate a setup for
what is, after all, a MacOS-specific issue.</p>
<p>So, I decided to solve it in the quickest and hackiest way I could think of:
simply deploy and expose Chrome in a Docker container. Because it is deployed
as a container, it has access to the Docker network and therefore to the
container IPs.</p>
<p>The initial setup requires a bit of work. First, besides Docker, you will need
XQuartz (for an X Window System on MacOS) and socat (for hooking the Chrome
container up to XQuartz).</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell">brew <span class="nb">install </span>socat
brew cask <span class="nb">install </span>xquartz</code></pre></figure>
<p>After installing, you will need to reboot your machine.</p>
<p>Now, to run Chrome in a container, first set up a stream between the Chrome
container and the XQuartz server:</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell">socat TCP-LISTEN:6000,reuseaddr,fork UNIX-CLIENT:<span class="se">\"</span><span class="nv">$DISPLAY</span><span class="se">\"</span></code></pre></figure>
<p>In a different shell, run the Chrome container:</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell">docker run <span class="se">\</span>
<span class="nt">--rm</span> <span class="se">\</span>
<span class="nt">--name</span> chrome <span class="se">\</span>
<span class="nt">--net</span> host <span class="se">\</span>
<span class="nt">--volume</span> <span class="s2">"</span><span class="k">${</span><span class="nv">HOME</span><span class="k">}</span><span class="s2">/Downloads:/root/Downloads"</span> <span class="se">\</span>
<span class="nt">--volume</span> <span class="s2">"</span><span class="k">${</span><span class="nv">HOME</span><span class="k">}</span><span class="s2">/.config/google-chrome/:/data"</span> <span class="se">\</span>
<span class="nt">--security-opt</span> seccomp:unconfined <span class="se">\</span>
<span class="nt">--env</span> <span class="s2">"DISPLAY=</span><span class="si">$(</span>ifconfig en0 | <span class="nb">grep </span>inet | <span class="nb">awk</span> <span class="s1">'$1=="inet" {print $2}'</span><span class="si">)</span><span class="s2">:0"</span> <span class="se">\</span>
jess/chrome</code></pre></figure>
<p>Chrome should now launch in an XQuartz window.</p>
<p>I found that the responsiveness of the window varies quite a bit depending on
the display setup, ranging from acceptable to terrible. One way to speed up the
rendering is to change the color output to <code class="highlighter-rouge">256 Colors</code> in the
XQuartz preferences.</p>
<p>That said, this is just the quickest solution that I came up with. There might
be (and probably is) an easier solution that I am not aware of. If you know a
faster alternative, let me know!</p>
<h3 id="further-reading">Further reading</h3>
<ul>
<li><a href="https://cntnr.io/running-guis-with-docker-on-mac-os-x-a14df6a76efc">https://cntnr.io/running-guis-with-docker-on-mac-os-x-a14df6a76efc</a></li>
<li><a href="https://github.com/dunckr/chrome-docker-mac">https://github.com/dunckr/chrome-docker-mac</a></li>
</ul>

Fixing kubectl autocompletion for an alias in Zsh (2019-08-05)
https://erwinvaneyk.nl/kubectl-alias-in-zsh

<p>With the upcoming switch in MacOS to use Zsh rather than Bash, I decided to
give Zsh a shot as well for my setup. The setup (Zsh + oh-my-zsh) has been great
so far. However, setting up autocompletion for my kubectl alias (<code class="highlighter-rouge">k</code>) in the
common way caused a segfault in the shell.</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nb">alias </span><span class="nv">k</span><span class="o">=</span><span class="s2">"kubectl"</span>
compdef <span class="nv">k</span><span class="o">=</span><span class="s2">"kubectl"</span></code></pre></figure>
<p>Since probably many people hit this issue at some point, I figured I’d share
the workaround that I am using to have a kubectl alias with autocompletion.</p>
<p>So, to work around this issue, I sourced the autocompletion of kubectl
manually, replacing the kubectl command with the <code class="highlighter-rouge">k</code> alias:</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nb">alias </span><span class="nv">k</span><span class="o">=</span><span class="s2">"kubectl"</span>
<span class="nb">source</span> <<span class="o">(</span>kubectl completion zsh | <span class="nb">sed</span> <span class="s1">'s/kubectl/k/g'</span><span class="o">)</span></code></pre></figure>

The Design, Productization, and Evaluation of a Serverless Workflow-Management System (2019-06-21)
https://erwinvaneyk.nl/thesis

<p><strong>Abstract</strong>
The need for accessible and cost-effective IT resources has led to the
near-universal adoption of cloud computing. Within cloud computing,
serverless computing has emerged as a model that further abstracts away
operational complexity of heterogeneous cloud resources. Central to this form of
computing is Function-as-a-Service (FaaS); a cloud model that enables
users to express applications as functions, further decoupling the application
logic from the hardware and other operational concerns. Although FaaS has seen
rapid adoption for simple use cases, there are several issues that impede its
use for more complex use cases. A key issue is the lack of systems that
facilitate the reuse of existing functions to create more complex, composed
functions. Current approaches for serverless function composition are either
proprietary, resource inefficient, unreliable, or do not scale. To address this
issue, we propose an approach orchestrate composed functions using reliably and
efficiently with workflows. As a prototype, we design and implement
Fission Workflows: an open-source serverless workflow system which leverages the
characteristics of serverless functions to improve the (re)usability,
performance, and reliability of function compositions. We evaluate our prototype
using both synthetic and real-world experiments, which show that the system is
comparable with or better than state-of-the-art workflow systems, while costing
significantly less. Based on the experimental evaluation and the industry
interest in the Fission Workflows product, we believe that serverless workflow
orchestration will enable the use of serverless applications for more complex
use cases.</p>
<p><a href="../attachments/thesis-embargo-availablelater.pdf" class="btn btn--success btn--large">Thesis</a>
<a href="../attachments/thesis-slides.pdf" class="btn btn--success btn--large">Slides</a></p>

Talk: Serverless Operations: From Dev to Production @ Kubecon EU 2019 (2019-05-23)
https://erwinvaneyk.nl/kubecon-europe-2019-serverless-operations

<p><strong>Abstract</strong>
FaaS functions on Kubernetes are increasingly popular. We often talk about the developer productivity advantages, such as the time to create a useful application from scratch without learning a lot about Kubernetes. In this talk we will focus on the operational aspects of serverless applications on Kubernetes.</p>
<p>What does it take to use serverless functions in Production, with safety, and at scale?</p>
<p>This talk covers 6 specific approaches, patterns and best practices that you can use with any FaaS/Serverless framework. These practices are geared towards improving quality, reducing risk, optimizing costs, and generally moving you closer towards production-readiness with serverless systems.</p>
<p><a href="../attachments/kubecon-europe-2019-going-faaster.pdf" class="btn btn--success btn--large">Slides</a>
<a href="https://www.youtube.com/watch?v=5ftE6LBdkGY" class="btn btn--success btn--large">Recording</a></p>

Everything you need to know about serverless: What does the future hold? (2019-05-16)
https://erwinvaneyk.nl/interview-serverless-future

<p>Recently, I was interviewed regarding my talk at JAX DevOps: <a href="https://jaxenter.com/servereless-interview-van-eyk-158600.html">Everything you need to know about serverless: What does the future hold?</a></p>