package main

import (
	"fmt"
	"runtime"
	"sync/atomic"
	"time"
)

var counter int64

func Run(done chan struct{}) {
	go func() {
		if counter >= 1000 {
			done <- struct{}{}
			time.Sleep(500 * time.Millisecond)
		} else {
			atomic.AddInt64(&counter, 1)
			Run(done)
		}
	}()
}

func main() {
	done := make(chan struct{})
	Run(done)
	<-done
	fmt.Printf("Counter=%v Goroutines=%v\n", counter, runtime.NumGoroutine())
}
Options:
A) 1001
B) 1000
C) 2
D) 1
E) System Crash
The answer is C, 2 goroutines. Bonus points if you noticed that the way the counter is being read is not concurrency safe.
At first glance, you may think that goroutines which spawn other goroutines have their lifetimes linked in some way, such as each parent's lifetime being tied to the lifetimes of its children. This is an easy mistake to make given the apparent recursive nature of the code above. However, each goroutine is actually entirely independent of the others, and each one terminates as soon as it has no further instructions to execute.
In the example above, each goroutine will spawn a child goroutine and almost immediately terminate. The final goroutine will send a response to the main goroutine via a channel and sleep, preventing it from terminating.
Now what would happen if we modified the code to look like this?
var counter int64

func Run() {
	go func() {
		if counter >= 1000 {
			time.Sleep(500 * time.Millisecond)
		} else {
			atomic.AddInt64(&counter, 1)
			Run()
		}
	}()
}

func main() {
	Run()
	fmt.Printf("Counter=%v Goroutines=%v\n", counter, runtime.NumGoroutine())
}
Answer: the program terminates almost immediately, most likely printing a counter value far below 1000. The main goroutine exits as soon as there are no instructions left for it to execute, which happens right after the Printf statement. Without the channel for synchronization, the main goroutine has no reason to wait for the completion of our ‘recursive’ goroutine spawning.
Since parent and child goroutines don’t have their lifecycles linked, you can leverage this behavior in powerful ways. For example, we can have an asynchronous job worker restart itself when a panic occurs!
package main

import "fmt"

type Worker struct {
	job    func() error
	result chan error
}

func (w *Worker) Start() {
	go w.Run()
}

func (w *Worker) Run() {
	fmt.Printf("Running...\n")
	defer func() {
		if err := recover(); err != nil {
			fmt.Printf("Panic=%v\n", err)
			go w.Run()
		}
	}()
	w.result <- w.job()
	fmt.Printf("Done!\n")
}

func main() {
	var hasFailed bool
	worker := &Worker{
		job: func() error {
			if !hasFailed {
				hasFailed = true
				panic(fmt.Errorf("Oh dear!"))
			}
			fmt.Printf("hard job complete!\n")
			return nil
		},
		result: make(chan error),
	}
	worker.Start()
	result := <-worker.result
	fmt.Printf("Result=%v\n", result)
}
In the above code, we create a worker that will panic on the first attempt to run the job. The initial Run goroutine captures the panic and simply spawns a new goroutine to replace the original failed goroutine. The output of the above code will look something like this (the Done! statement may not be printed):
Running...
Panic=Oh dear!
Running...
hard job complete!
Done!
Result=<nil>
If you have a worker that processes a variety of jobs, then having the worker respawn itself can be a simpler solution than having an orchestrator track worker terminations via channels and respawn the failed worker.
All goroutines run and terminate independently of one another. You can leverage this independent behavior to do interesting and bizarre looking things, such as ‘recursively’ spawning goroutines without breaking the call stack. If you care about orchestrating the lifecycles of goroutines, then you’ll need to use an orchestration primitive like a channel, mutex or waitgroup.
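As a standalone sketch of that last point (not from the original article), here is the same counting problem orchestrated explicitly with a sync.WaitGroup instead of channels and sleeps:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countConcurrently launches n goroutines and waits for all of them with a
// WaitGroup — explicit orchestration instead of relying on sleeps or luck.
func countConcurrently(n int) int64 {
	var counter int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&counter, 1)
		}()
	}
	wg.Wait() // blocks until every goroutine has called Done
	return atomic.LoadInt64(&counter)
}

func main() {
	fmt.Println("Counter =", countConcurrently(1000))
}
```

Because the main goroutine blocks on wg.Wait(), the program deterministically prints 1000 with no race on the final read.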
Credit goes to Vadim Uchitel for the goroutine worker restart idea that prompted this article.
While the basic premise of Functions as a Service is pretty straightforward, it can be easy to draw incorrect assumptions about the lifecycle of a single invoked function. Recently we encountered some HTTP connection reset issues on one of our Go Lambda functions, which was a great excuse for me to dig into container reuse in Amazon Lambdas in order to debug and resolve the underlying issue.
This article will focus on two main areas:
Chances are if you’ve ever searched for documentation regarding Amazon Lambda container reuse, you’ll have stumbled on this article. The article uses Node.js as the example language of choice, and highlights a few key points:
However, if you haven’t changed the code and not too much time has gone by, Lambda may reuse the previous container. You should never depend on container reuse for proper execution of some code. Tasks that must complete with each invocation should be bound to the handler function in some way.
Our Go lambda function would respond to infrequent SNS events. These events are fairly spread out and don’t generally overlap in their arrival (arriving every few minutes). In practice, we saw that the same container could be reused for around 40 minutes, or potentially as long as several hours. To verify this, we included a random identifier that was printed with each handler function invocation (likewise, all container messages go to the same log group).
These reuse patterns are likely specific to our traffic load, but the fact that the same lambda could be reused for 40+ minutes (or even hours) is pretty eye opening.
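The random-identifier diagnostic described above can be sketched roughly like this; the names here are illustrative, not our exact code:

```go
package main

import (
	"fmt"
	"log"
	"math/rand"
)

// containerID is computed once per process — i.e. once per container cold
// start — so every invocation handled by a warm container logs the same ID.
var containerID = fmt.Sprintf("%08x", rand.Uint32())

// handleEvent stands in for the Lambda handler body.
func handleEvent(event string) {
	log.Printf("container=%s event=%s", containerID, event)
}

func main() {
	handleEvent("sns-1")
	handleEvent("sns-2") // same containerID in the logs implies container reuse
}
```

Grepping the log group for repeated IDs then tells you how long a single container survived.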
The Amazon article mentioned above focuses on Node.js. In dynamic/interpreted languages, the reuse model is fairly intuitive: you create a handler/main type function and Amazon can invoke that specific function when a new request comes in.
When using a compiled language like Go, it’s unclear exactly how your handler code can be reused. Initially I assumed that the executable you uploaded would be invoked each time, leading to a fresh state on each invocation, even if the underlying container was reused.
Under the hood, the aws-lambda-go library accepts a Handler function passed to lambda.Start. The Start function actually starts an RPC server listening on a local TCP port; new events (such as SNS messages or web requests) arrive over this connection and are dispatched to the Handler you provided to Start.
func Start(handler interface{}) {
	wrappedHandler := NewHandler(handler)
	StartHandler(wrappedHandler)
}

func StartHandler(handler Handler) {
	port := os.Getenv("_LAMBDA_SERVER_PORT")
	lis, err := net.Listen("tcp", "localhost:"+port)
	if err != nil {
		log.Fatal(err)
	}
	function := new(Function)
	function.handler = handler
	err = rpc.Register(function)
	if err != nil {
		log.Fatal("failed to register handler function")
	}
	rpc.Accept(lis)
	log.Fatal("accept should not have returned")
}
The code can be seen here.
The key takeaway here is that your executable is only invoked once per creation of a container; any resources created and scoped to your handler function’s execution will be recreated each time the container is invoked and reused. Any resources created with a scope outside of your handler function will potentially be reused between invocations, regardless of how long the container has been frozen.
This behavior is great as it allows you to pool/reuse connections and other resources, but it does require you to be mindful of connection timeouts and the like, as resources may sit frozen for an undefined period of time. You can read more about the Execution Context of lambdas for greater detail.
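A minimal sketch of that scoping rule — package-level state survives warm invocations, handler locals do not. The handler here is a stand-in, not real Lambda wiring:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Package-level state is created once per container and survives warm
// invocations, so the HTTP client (and its connection pool) is shared.
var client = &http.Client{Timeout: 10 * time.Second}

// handlerClient stands in for the body of a Lambda handler: locals would be
// rebuilt on every invocation, but the package-level client is the same object.
func handlerClient() *http.Client {
	return client
}

func main() {
	// Two simulated invocations in the same process see the same client.
	fmt.Println("same client reused:", handlerClient() == handlerClient())
}
```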
In hindsight, this behavior makes perfect sense: re-using containers is a great optimization for both Amazon and developers, but if you’re newer to Lambda functions this behavior may not be intuitive (especially when uploading an executable, like with Go).
Now that we understand how Go Lambda containers can be reused, let’s dive further into the connection reset issue. The behavior we were witnessing was that our HTTP request to a remote server would eventually fail due to a connection reset issue. In our application, the net/http.Client was created once and reused with each invocation of the handler function. The http.Client instance in question used a custom http.Transport object, rather than the net/http.DefaultTransport object.
As it turned out, the net/http Transport uses connection pooling/caching under the hood. This means if you’re periodically making requests to the same server, like we were, the Transport will keep around the underlying connection and reuse it on the next request. Connections that aren’t frequently reused or exhibit errors may be closed by the Transport.
In retrospect the connection pooling makes perfect sense, and this behavior is probably implemented in most standard http libraries for programming languages or third party libraries. For example, Node.js appears to pool/reuse connections as well. However, if you’re unaware of this behavior you may assume a fresh HTTP Connection is used with each request, leading you to draw incorrect conclusions about the source of connection errors.
Our HTTP Client was using a http.Transport that looked something like this. Can you spot the error?
var Transport = &http.Transport{
	DialContext: (&net.Dialer{
		Timeout: 5 * time.Second,
	}).DialContext,
	TLSHandshakeTimeout: 5 * time.Second,
}
Again, let’s look at the http.Transport documentation.
Our configuration specifies some timeouts, which is a great reliability practice, but as it turns out there are a large number of Transport fields not defined in this configuration. In Go, variables/fields are initialized with their zero value if no other value is specified. Strings have a zero value of "", bools default to false, numeric types default to 0, and pointer types have a zero value of nil.
With this in mind, let’s look at some of the Fields not defined on the above Transport struct:
In this case, the lack of an IdleConnTimeout meant that our http.Client would indefinitely try to maintain a connection to the upstream server. The upstream server would close this connection after a few minutes, and a subsequent lambda invocation with the same container would encounter a connection reset error when trying to reuse this closed connection.
To verify this diagnosis, we started by setting DisableKeepAlives to true to ensure that the error was resolved when connection reuse was completely disabled. Once this solution was verified we went ahead and provided some sane default values for the other Transport configuration values.
Overall, I like zero values in Go, but this does feel like an error-prone way to design the http.Transport API. If the Transport struct adds another field in a newer version of the standard library, any existing explicit initialization of an http.Transport instance will use the zero value for that field, potentially resulting in a no-limit value. Not having explicit limits/timeouts is a stability anti-pattern and could lead to outages in the future.
This is definitely a case where more careful reading of the documentation on our part could have prevented this error, but it is nice when APIs/libraries make it harder for you to do the wrong thing without explicit intent/acknowledgement by the developer (ex: dangerouslySetInnerHTML in React).
My previous experience with Lambdas was pretty limited, so debugging this issue was a great learning experience and excuse to dig into lambda internals for Go (and other languages) as well as the net/http Client. Hopefully this experience is insightful for others as well.
Where we misstepped:
Key Takeaways:
I recently started learning the Go Programming Language and decided to build a small project to better explore the language. After a bit of deliberation, I settled on writing a URL Shortening Service that leverages Redis as its backing datastore (based on an exercise from 7 Databases in 7 Weeks). My hope was that by picking a simple project I could better focus on leveraging the Go tools, Go testing, and learn more about Go best practices. The basic URL Shortener only needs 3 routes:
This article is divided into sections describing each of my coding sessions. I entered each session with a few fixed goals, and I try to outline my takeaways and lessons learned from each one. I’m not a Go expert, so if you see instances of unidiomatic code, please let me know below!
Lastly, you can find the project source code on Gitlab.
My main goals for this session were to get the project into a state where it compiles, passes a dummy test and integrates with Gitlab CI to verify the Build/Test stages.
The Go language comes with a number of excellent utilities out of the box that made Docker and CI integration pretty straightforward for someone new to the language. My initial thought was to try leveraging the go build and go test commands in different stages of the .gitlab-ci.yml file and see if that would be sufficient. I decided that I’d go ahead and create the basic project structure and add a dummy entrypoint and test that the CI/CD could run.
I began by creating a new directory in my GOPATH for the project, initializing the project repository and creating go-short.go, seen below:
package main

import (
	"fmt"
	"os"
)

// Hello returns a formatted Welcome Message
func Hello(target string) string {
	return fmt.Sprintf("Hello, %s!", target)
}

func main() {
	target := "World"
	if len(os.Args) >= 2 {
		target = os.Args[1]
	}
	fmt.Println(Hello(target))
}
After verifying that the project could be built using go build, I went ahead and added a simple test in the file go-short_test.go:
package main

import (
	"strings"
	"testing"
)

func TestHello(t *testing.T) {
	var tests = []string{
		"Hello", "World", "John", "Jane", "abc",
	}
	for _, input := range tests {
		if !strings.Contains(Hello(input), input) {
			t.Errorf("Hello(%q) contains %v, failed", input, input)
		}
	}
}
For this dummy test case I implemented a few testing guidelines outlined in the Go Programming Language book. I used table-driven testing, in which multiple test cases are specified as a slice of structs or values and evaluated in a loop. Additionally, I leveraged t.Errorf to report failures, rather than t.Fatal, as t.Error will mark the test case failure without terminating execution of the test. This allows us to review all possible test failures for a single table-driven test, rather than aborting at the first error. I also used an expressive error formatting message that explicitly states the expectation that failed and the corresponding parameters. Finally, in order to make the test more robust, I checked for the inclusion of a given substring, rather than evaluating the entire returned string. This way any minor tweaks to the Hello function wouldn’t invalidate our test cases completely.
Next I verified these tests ran and passed by running go test. As always, if you aren’t writing the test prior to implementing the function body you should force the test to fail to verify the test is making the correct assertions (ex: if strings.Contains(Hello(input), input)), or break the Hello function implementation).
With a basic program and test in place, we can finally add a .gitlab-ci.yml file to integrate CI/CD. This ended up being very simple:
image: golang:1.11

stages:
  - build
  - test

build:
  stage: build
  script:
    - go build

test:
  stage: test
  script:
    - go test
As a bonus, I also went ahead and added a Dockerfile based on the official image:
FROM golang:1.11
WORKDIR /go/src/gitlab.com/cfilby/go-short
COPY . .
RUN go install
EXPOSE 8080
CMD ["go-short"]
With these files in place, the last step was to commit and push the project to Gitlab, completing the development session.
Takeaways:
I embarked on this session with the goal of integrating the new Go Module system, adding a few third party dependencies, integrating Redis and implementing the URL shortening functionality. I also wanted to add more tests for more complex functionality.
It looks like you can only initialize Go Modules outside of the GOPATH, so I moved the project to another directory and ran go mod init gitlab.com/bindersfullofcode/go-short. Running the command generates a go.mod and go.sum file for tracking project dependencies.
With Go Modules set up, all that’s left to do is add some dependencies and get coding! Go Modules will save any dependencies you go get from the project directory to the go.mod and go.sum files. For this project I opted to use go-redis/redis for ‘persistence’ and teris-io/shortid to generate the IDs that map to full-length URLs.
go get -u github.com/go-redis/redis
go get -u github.com/teris-io/shortid
Given that this URL shortener is a simple proof of concept, I decided to store the URL ShortID as a key in Redis and have the corresponding value be the URL. A more robust implementation would store statistics, include a timeout, and more.
To implement this in code we need to use the Redis client to execute the following commands:
// Save the ShortID and URL
SET abcdef http://google.com
OK
// Fetch a URL for a ShortID
GET abcdef
http://google.com
// Check if the ShortID already exists to avoid overwriting
EXISTS abcdef
1
My initial thought was to create a generic key/value Storage Interface for the rest of the application to leverage in order to abstract away the underlying use of Redis.
// Storage Exposes a basic Key/Value Storage Interface
type Storage interface {
	Get(key string) (string, error)
	Set(key string, value interface{}) error
	Exists(key string) (bool, error)
}
The RedisStorage struct that conforms to this interface was fairly simple to implement. For the sake of brevity, I’ll only include the Exists method below, but you can see the rest of the code on Gitlab:
// Exists checks if the specified key is in use
// Returns false when an error occurs.
func (r *RedisStorage) Exists(key string) (bool, error) {
	exists, err := r.client.Exists(key).Result()
	if err != nil {
		return false, err
	}
	return exists == 1, nil
}
With the Storage Interface and Redis implementation in place, I swapped my focus to the URL Shortening code. Using the Storage interface means we could easily swap out the underlying implementation for testing or for a different database altogether.
Again, I created an interface to wrap the URLShortening functionality (this should make testing components that use the Shortener easier):
// Shortener is the interface that wraps UrlShortening operations
type Shortener interface {
	Shorten(url string) (string, error)
	Get(key string) (string, error)
}
The implementation for the Shorten function is particularly interesting as it needs to generate an id, check if that id is already in use, and store the new shortened URL and report any errors:
// Shorten creates a shortened URL Key and persists the key to the underlying store
// Returns the shortened url key if successful
func (u *Store) Shorten(url string) (string, error) {
	id, err := generateID()
	if err != nil {
		return "", fmt.Errorf("generating id for %s: %v", url, err)
	}
	exists, err := u.storage.Exists(id)
	if err != nil {
		return "", fmt.Errorf("checking existence of key %s for url %s: %v", id, url, err)
	} else if exists {
		return "", fmt.Errorf("key %s for url %s already exists in storage", id, url)
	}
	if err := u.storage.Set(id, url); err != nil {
		return "", fmt.Errorf("storing key %s for url %s: %v", id, url, err)
	}
	return id, nil
}
The generateID function is a package level variable to allow for easy swapping of the implementation during testing (seen below).
The final step was to create some tests to verify the functionality and error handling of the Get and Shorten functions. Below you can see the TestShortenGenerateID function that stubs out the generateID function for testing purposes (and eventually restores it):
// This test is pretty clunky, but I wanted to try my hand at testing external dependencies
// as outlined in The Go Programming Language 11.2.3
// TODO: Complete Testing of Shorten
func TestShortenGenerateID(t *testing.T) {
	tests := []struct {
		key         string
		generateID  func() (string, error)
		shouldError bool
	}{
		{"should succeed", func() (string, error) { return "KEY", nil }, false},
		{"should error", func() (string, error) { return "", errors.New("Error") }, true},
	}
	idGenerator := generateID
	for _, test := range tests {
		store := createURLStorage(nil)
		generateID = test.generateID
		_, err := store.Shorten(test.key)
		if test.shouldError && err == nil {
			t.Errorf("store.Shorten(%s) expected error, found none", test.key)
		}
		if !test.shouldError && err != nil {
			t.Errorf("store.Shorten(%s) found error, expected none: %v", test.key, err)
		}
	}
	generateID = idGenerator
}
Note: The createURLStorage(map[string]string) function creates a fakeStorage instance backed by a map.
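A map-backed fake along those lines might look like the following sketch; this is a hypothetical reconstruction of fakeStorage, not the project's actual code:

```go
package main

import "fmt"

// Storage mirrors the interface from the article.
type Storage interface {
	Get(key string) (string, error)
	Set(key string, value interface{}) error
	Exists(key string) (bool, error)
}

// fakeStorage satisfies Storage with a plain map, so tests need no Redis.
type fakeStorage struct {
	data map[string]string
}

func (f *fakeStorage) Get(key string) (string, error) {
	v, ok := f.data[key]
	if !ok {
		return "", fmt.Errorf("key %s not found", key)
	}
	return v, nil
}

func (f *fakeStorage) Set(key string, value interface{}) error {
	f.data[key] = fmt.Sprint(value)
	return nil
}

func (f *fakeStorage) Exists(key string) (bool, error) {
	_, ok := f.data[key]
	return ok, nil
}

func main() {
	var s Storage = &fakeStorage{data: map[string]string{}}
	s.Set("abcdef", "http://google.com")
	url, _ := s.Get("abcdef")
	ok, _ := s.Exists("abcdef")
	fmt.Println(url, ok)
}
```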
I felt that this coding session definitely exposed me to some areas I need to study further in Go. While I’m content with the Shortener and Storage abstractions, I think that I should have used finer grained interfaces to simplify testing and make the Storage more generalizable. Likewise, my current tests feel a bit clunky, especially when it comes to testing error paths.
Takeaways:
Next up: integrate net/http!
The goal of this session was to create the HTTP request handlers and wire up the URL Shortening functionality. Our HTTP Handlers need to respond to three types of requests: showing the Home Page, creating a URL Redirect and responding to a URL Redirect request.
These types of requests correspond to the following three REST Methods:
The Go standard library includes the net/http and html/template packages, both of which are great starting points for a basic server-rendered-pages project. If our project were more complex, I would go ahead and integrate gorilla/mux for its cleaner routing abstraction and middleware support, but we can make do without it for now.
We’ll start by rewriting the go-short.go entry point to start a web server.
func main() {
	mux := http.NewServeMux()
	web.RegisterHandlers(mux)
	if err := http.ListenAndServe(":8080", mux); err != nil {
		log.Fatalf("unable to start server: %v\n", err)
	}
}
We begin by creating a new ServeMux that we’ll register our request handlers on. net/http includes a number of functions to register request handlers (such as http.HandleFunc(pattern string, handler func(ResponseWriter, *Request))) which register handlers onto the http.DefaultServeMux. The ListenAndServe function starts the webserver and accepts an http.Handler which is responsible for handling incoming requests. When the handler argument is nil, the DefaultServeMux is used. In general, creating your own ServeMux is preferred, as using the DefaultServeMux can allow third-party packages to unexpectedly register handlers onto your webserver. Alex Edwards explains Go routing and the DefaultServeMux in greater detail [here](https://www.alexedwards.net/blog/a-recap-of-request-handling). Lastly, we create a function in our shortener package that accepts a ServeMux and registers the request handlers.
The Go standard library includes both a [text/template](https://golang.org/pkg/text/template/) and an [html/template](https://golang.org/pkg/html/template/) package that implement the same interface. The Template interface allows you to load/parse individual templates, or load them all via a pattern or variadic argument list and reference them by name. Seeing as we have two templates, I’ve opted to load all templates matching the views/*.html glob.
The views/ folder contains two templates, home.html and created.html
var tmpl = template.Must(template.ParseGlob("views/*.html"))
When rendering a page, we must provide an io.Writer to write the rendered template to, the name of the desired template, and the context to inject into the template. For simplicity I’ve created the following helper function in handlers.go.
func renderPage(w http.ResponseWriter, template string, content interface{}) {
	if err := tmpl.ExecuteTemplate(w, template, content); err != nil {
		http.Error(w, fmt.Sprintf("rendering template: %v", err), http.StatusInternalServerError)
	}
}
The template syntax itself has vague similarities to the Handlebars templating language: {{ }} is used to wrap actions. You can reference properties on the context/data struct using the cursor (a dot/period). Ex: {{ .URL }} would correspond to the URL field of the following data struct type Data struct { URL string }.
You can see the content of created.html below:
<!DOCTYPE html>
<html lang="en">
<head>
<title>URL Shortener</title>
</head>
<body>
<h2>Go Short!</h2>
<p>New URL Created!</p>
<a href="/{{.ShortID}}">{{.ShortID}}</a>
</body>
</html>
This templating library is fairly expressive and allows for conditional statements, loops and the nesting/combining of several templates into a single rendered view. For simplicity I repeated the whole document structure in both templates, but a more complete implementation would share a common site layout.
Additional Resources:
We’ll have a centralized router for our HTTP Requests that looks like the following:
func handleHomePage(w http.ResponseWriter, r *http.Request) {
	if r.URL.Path == "/" {
		if r.Method == http.MethodPost {
			createShortenedURL(w, r)
		} else if r.Method == http.MethodGet {
			renderPage(w, "home.html", nil)
		}
	} else if r.Method == http.MethodGet {
		handleRedirect(w, r)
	}
}
When GET - / is invoked, we’ll render the home page. When POST - / is invoked, we’ll create a new redirect (if possible) and render the created page. Lastly, if any other path is requested we’ll attempt to perform a redirect. Using gorilla/mux would provide a much more robust path-matching solution, but this basic router should serve our purposes.
The handlers all leverage a package level urlShortener object var urlShortener = shortener.New(storage.New("localhost:6379")).
The createShortenedURL handler must parse the request body, validate the arguments, create a new URL and render the created page. It also renders appropriate status codes where possible:
func createShortenedURL(w http.ResponseWriter, r *http.Request) {
	if err := r.ParseForm(); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if len(r.Form["url"]) == 0 {
		http.Error(w, "no url provided", http.StatusBadRequest)
		return
	}
	url := r.Form["url"][0]
	shortID, err := urlShortener.Shorten(url)
	if err != nil {
		http.Error(w, fmt.Sprintf("unable to shorten %s: %v", url, err), http.StatusInternalServerError)
		return
	}
	renderPage(w, "created.html", struct {
		ShortID string
	}{shortID})
}
Notice the correspondence between the anonymous struct and the created.html template shown in the View section.
The redirect handler slices the URL path to remove the leading /, finds the corresponding key in the Shortener and redirects to the stored URL if found.
func handleRedirect(w http.ResponseWriter, r *http.Request) {
	key := r.URL.Path[1:]
	url, err := urlShortener.Get(key)
	if err != nil {
		http.Error(w, fmt.Sprintf("unable to find %s: %v", key, err), http.StatusNotFound)
		return
	}
	http.Redirect(w, r, url, http.StatusTemporaryRedirect)
}
Finally, this central router is registered using a public RegisterHandlers function.
func RegisterHandlers(mux *http.ServeMux) {
	mux.HandleFunc("/", handleHomePage)
}
Takeaways:
Tests could have been written with the net/http/httptest package but were skipped given the retroactive nature of this writeup.

These conclusions are written with the benefit of hindsight, given that this article was finished several months after I initially started writing it. Since then I’ve learned a bit more about Go and have several ideas on how I’d further tweak this application.
For those that are new to Go, I’d definitely recommend writing a basic web application with minimal dependencies. The standard library has a lot of powerful abstractions out of the box, and understanding these abstractions can give you a better sense of how to write idiomatic Go code. You can always view the standard library source code in the godocs if you’re ever wondering how it’s implemented under the hood (Ex: [http.HandleFunc](https://golang.org/src/net/http/server.go?s=73427:73498#L2396)).
If you’ve made it this far, thanks for taking the time to read this article! If you have any comments, suggestions or corrections, please let me know below!
Developing an Alexa Skill turned out to be an interesting challenge largely due to the Voice User Interface (VUI). The basics of developing a skill are fairly straightforward (the experience is somewhat similar to developing a chatbot):
The Voice User Interface is defined within the Alexa Skill Console: when a user invokes your skill Amazon will use the interaction model you define to parse the user’s utterance and map it to your skill’s intents. Interaction with your skill occurs primarily through custom and Amazon defined intents, each of which represent some user action/request. In the VUI you’ll define a number of sample utterances that a user can speak to invoke that specific intent. To accept arguments from a user you define slots within your sample utterances - slot types may be officially supported Alexa Slots, for things such as names and places, or a Custom Slot type that you define.
The Lambda Function (or server) is a handler that is invoked with a formatted Alexa VUI JSON message containing the parsed user’s utterance. The handler is responsible for processing each event/request and responding with a JSON-formatted response that contains the appropriate Speech Synthesis Markup Language (SSML) text to be spoken to the user. If you use the Alexa SDK, most of this will be abstracted for you.
Here’s what I learned while implementing my skill:
If you have a _PLAY StateHandler and a GuessIntent defined in that handler, then the Alexa SDK will actually remap the VUI intent to GuessIntent_PLAY. If you want to handle common intents like AMAZON.HelpIntent or AMAZON.CancelIntent you’ll need to redefine these in each of your StateHandlers.

The certification process for an Alexa Skill was surprisingly painless (and free!), especially when your skill is as simple as Promis Proctor. In general, the certification process aims to verify that your skill functions correctly and provides a reasonably robust user experience.
Here’s what I learned:
While your experience may vary, it took me about a week to get the skill certified, including a rejection for listing an unsupported example phrase and not properly supporting synonyms.
No personal project is ever entirely done. While Promis Proctor is more of a proof of concept than anything, there are still a few areas I’d like to explore further:
Here are a few links that may prove useful if you’re also starting out with an Alexa Skill:
Got any suggestions? Anything you’d like to add? Let me know below!
AngularJS is an extremely popular Single Page Application framework maintained by Google. Given its popularity, it’s no surprise that developers often choose AngularJS as their framework of choice when building a Hybrid Application using Cordova or PhoneGap. To get the most out of this pairing, careful steps should be taken when using Cordova and AngularJS together, particularly when dealing with Cordova plugins.
Cordova Plugins are specially prepared code bundles that expose a JavaScript API, allowing HTML5-based Cordova applications to interact with native device hardware, such as the GPS and accelerometer. Behind the scenes each plugin has native code written in the languages of the supported platforms (ex: Objective-C/Swift, Java, etc), allowing the WebView to retrieve data from device hardware when the JavaScript API is invoked. While the Cordova plugin system is extremely extensible, one common failing is that application JavaScript code may execute before Cordova has finished loading in its entirety. This can lead to plugins not functioning properly, or the application crashing altogether. To avoid this potential race condition, we should leverage the deviceready event provided by Cordova within our applications.
There are several ways we can go about using Cordova Plugins with AngularJS:
One option is to safety check each plugin call, ensuring the deviceready event has been fired. We could configure a Factory to listen for the deviceready event and then chain promises, as seen below. While this works, it still increases the overall complexity of our application by requiring each of the developers to remember to safety check each plugin call.

DeviceReadyFactory.ready()
  .then(function() {
    // Cordova Plugin Call
  });

Another option is to manually bootstrap the AngularJS application after the deviceready event. With this approach, by the time our AngularJS application code is running, all of our Cordova plugin APIs are safe to execute. Manual bootstrapping allows us to maintain cleaner code at the potential cost of delaying the startup of our application.

In this article we’ll demonstrate one approach to manually bootstrapping a Cordova/AngularJS application. To do this, we’ll discuss automatic and manual bootstrapping of AngularJS applications and show an example Cordova application using AngularJS, ngCordova and cordova-plugin-geolocation.
The most common way to initialize an AngularJS application is by using the ng-app directive:
<body ng-app="myApp">
<!-- AngularJS Code Evaluated here -->
</body>

The ng-app directive performs automatic bootstrapping of an AngularJS application and initializes the root module of the application if one is specified (in this case, myApp). The ng-app directive also indicates the root element of the AngularJS application - any AngularJS template code or directives found within the root element will be evaluated by AngularJS during the application bootstrapping process.
In order to start our AngularJS application after the deviceready event, we will need to have manual control over when our AngularJS application bootstraps. To accomplish this, we can instead utilize the [bootstrap](https://docs.angularjs.org/api/ng/function/angular.bootstrap) function provided by Angular. The bootstrap function allows us to specify the root element of our application, as well as any AngularJS modules that should be initialized. For example, we could instead launch the above application with the following JavaScript code:
angular.bootstrap(document.body, ['myApp']);

Note: Only use one bootstrap method for an AngularJS application. If using angular.bootstrap, do not include an ng-app directive and vice versa.
Using the AngularJS Bootstrap function, we can now initialize our AngularJS application in the deviceready event listener callback function:
document.addEventListener('deviceready', function() {
  angular.bootstrap(document.body, ['myApp']);
}, false);

You can see this approach demonstrated in the sample AngularJS/Cordova application detailed in the subsequent sections.
Our sample Cordova/AngularJS application consists of one view that displays Geolocation Data in a Bootstrap Panel.
The application itself consists of three main files:
index.html - Contains a single AngularJS view using ng-controller to provide context data.
app.js - Declares our AngularJS Module and defines the ApplicationController.
init.js - Contains JavaScript code responsible for bootstrapping our application using an extension of the manual bootstrapping technique. In addition to bootstrapping after the deviceready event, the application will launch if run outside of the Cordova environment, which may be useful for live reload tools or debugging.
Our example uses the following dependencies for this example, managed via bower:
We also used the cordova-plugin-geolocation and cordova-plugin-dialogs plugins in conjunction with ngCordova.
For this example we created a default Cordova application with the CLI and installed the above dependencies using Bower. We then installed the specified Cordova plugins using the CLI.
You can check out the complete code on github.
Manually bootstrapping your AngularJS application after the deviceready event is just one method of avoiding potential race conditions when using Cordova plugins in AngularJS.
Do you have different ways of handling Cordova Plugins and AngularJS? Post them below!
Thanks for taking the time to check out my site!