Integrate with AWS Comprehend SDK in C#

Overview

AWS Comprehend is an easy-to-use cloud service that allows you to comprehend text by labeling each word with a part of speech such as verb, noun, proper noun and others. A probability rating is also assigned, allowing you to determine how sure AWS Comprehend was about its classification.

In this post I will demonstrate how to use AWS Comprehend to take plain text, find specific word matches, and bind them to executable commands. The commands I use here assume we want to generate code repositories/projects.

Example Usage

Given Text: Create a new website named Eventz
Task Output: Generate a new React git repo and name it Eventz

  • Create: a verb that indicates the need to make something from scratch.
  • new: an adjective that indicates the something needs to be made from scratch.
  • website: a noun that indicates that front-end programming technology is to be used.
  • Eventz: a proper noun that represents the “something’s” identity or name.

NOTE: This is a very basic example and will only work for a couple of very specific text phrases. For more accuracy we’d need to utilize other techniques, such as Large Language Models (LLMs), to determine if a word is similar to something vs. just a direct match.
e.g. website/front-end/page could all be considered accurate for a web-specific project.

Example Repo

You can check out this example repo where I invoke the dotnet CLI to manage projects based on a text phrase. It uses AWS Comprehend to extract meaning and matches it up with predefined dotnet CLI commands.

Basic Use

AWS Comprehend SDK

Install via package manager console.
Install-Package AWSSDK.Comprehend -Version 3.7.104.10

Or add directly to your csproj file

<ItemGroup>
  <PackageReference Include="AWSSDK.Comprehend" Version="3.7.104.10" />
</ItemGroup>

Configuration

using Amazon;
using Amazon.Comprehend;
using Amazon.Comprehend.Model;
using System.Diagnostics;

AWSConfigs.AWSProfileName = "your-aws-profile";
var comprehendClient = new AmazonComprehendClient();
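
If you prefer not to rely on a named profile for the region, the client constructor also accepts an explicit region. A minimal sketch (the region shown here is just an assumption):

// Alternative: pass the region explicitly instead of relying on the profile's default (assumed region).
var comprehendClient = new AmazonComprehendClient(RegionEndpoint.USEast1);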

Things our system cares about

var verbs = new string[] { "create", "delete" };
var projectTypes = new string[] { "desktop", "website" };

// Each entry pairs a verb and a project type with the command to execute.
var actions = new List<(string Verb, string ProjectType, Action<string> Command)>
{
    ("create", "desktop", CreateDesktop),
    ("create", "website", CreateWebsite),
    ("delete", "desktop", DeleteDesktop),
    ("delete", "website", DeleteWebsite)
};

void CreateDesktop(string projectName)
{
    // Some code that creates a desktop application code repository
}

// ....
// CreateWebsite(string projectName)
// DeleteDesktop(string projectName)
// DeleteWebsite(string projectName)
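
The example repo drives the dotnet CLI to do the actual work. As a rough sketch of what one of these commands could look like (the template name and arguments below are assumptions, not the repo's exact commands; it reuses the System.Diagnostics import from the configuration snippet):

void CreateWebsite(string projectName)
{
    // Hypothetical dotnet CLI invocation; adjust the template and arguments to your needs.
    var process = Process.Start("dotnet", $"new react --name {projectName}");
    process.WaitForExit();
}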

Comprehend Detect Syntax (classify)

var detectedSyntaxRequest = new DetectSyntaxRequest
{
    LanguageCode = SyntaxLanguageCode.En,
    Text = "create a new website named Eventz"
};
var detectedSyntaxResponse = await comprehendClient.DetectSyntaxAsync(detectedSyntaxRequest);

Process the detected syntax response

At this point we have a categorized version of our original text. By processing the words and their assigned Part of Speech Tag Type, we determine if there are words that match the parts our system cares about. Once we find a match, we store those words in specific variables that we’ll use later to build executable commands.

// The thing that needs to be done. e.g. Creating/Deleting
string projectAction = "";

// The type of thing. e.g. Desktop/Website code repository type
string projectType = "";

// Finally, the name of the thing. e.g. Eventz
string projectName = "";

foreach (var syntaxToken in detectedSyntaxResponse.SyntaxTokens)
{
    var partOfSpeechTag = syntaxToken.PartOfSpeech.Tag;

    if (partOfSpeechTag == PartOfSpeechTagType.VERB && verbs.Contains(syntaxToken.Text.ToLower()))
    {
        projectAction = syntaxToken.Text.ToLower();
    }

    if (partOfSpeechTag == PartOfSpeechTagType.NOUN && projectTypes.Contains(syntaxToken.Text.ToLower()))
    {
        projectType = syntaxToken.Text.ToLower();
    }

    // Proper nouns are determined to have a high probability of being a name
    if (partOfSpeechTag == PartOfSpeechTagType.PROPN)
    {
        // Preserve casing
        projectName = syntaxToken.Text;
    }
}

Bind the comprehension results to executable commands

var action = actions.First(x => x.Verb == projectAction && x.ProjectType == projectType);

// The Command item in each action tuple represents the command to execute.
var actionCommand = action.Command;

// Execute the action command that will create or delete a desktop or website by name
actionCommand(projectName);

ChatGPT C# Integration

Overview

Just like many others I’ve been very excited about using ChatGPT.
Here is a simple example of how you can integrate your own project with ChatGPT.

Alternatively, you can check out this simple proofreader I made using ChatGPT.

Prerequisites

You need to setup a billing account.
https://platform.openai.com/account/billing/payment-methods

You need to create an API key.
https://platform.openai.com/account/api-keys

Store your API key securely.
Note: I store it at the machine level, but you could do this at the process or user level.
e.g. on a Windows PC:

[System.Environment]::SetEnvironmentVariable('MY_OPEN_AI_API_KEY', 'your-sdk-api-key-value', 'Machine')

SDK

I opted to use an SDK to integrate with ChatGPT’s API.

Here is their list of official SDKs.

I went with a C# library, Betalgo/OpenAi by Betalgo.

Sample Code

using OpenAI.GPT3;
using OpenAI.GPT3.Managers;
using OpenAI.GPT3.ObjectModels.RequestModels;
using OpenAI.GPT3.ObjectModels;

var apiKey = Environment.GetEnvironmentVariable(
    "MY_OPEN_AI_API_KEY", EnvironmentVariableTarget.Machine);

var openAIService = new OpenAIService(
    new OpenAiOptions() { ApiKey = apiKey });

var request = new CompletionCreateRequest
{
    Prompt = "Give me a list of books on AI",
    Model = Models.TextDavinciV3,
    MaxTokens = 1000,
};

var result = await openAIService.Completions.CreateCompletion(request);

Console.WriteLine(result.Choices.FirstOrDefault().Text);
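
The snippet above uses the older text completion endpoint. The same library also exposes a chat-style endpoint; here is a hedged sketch based on the version I used (the exact model and method names may differ in newer releases of the Betalgo package, so check its docs):

var chatRequest = new ChatCompletionCreateRequest
{
    Messages = new List<ChatMessage>
    {
        ChatMessage.FromUser("Give me a list of books on AI")
    },
    Model = Models.ChatGpt3_5Turbo,
    MaxTokens = 1000,
};

var chatResult = await openAIService.ChatCompletion.CreateCompletion(chatRequest);

Console.WriteLine(chatResult.Choices.FirstOrDefault()?.Message.Content);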

How to Specify an AWS Profile in .NET SDK for DynamoDB

Intro

Working with Amazon Web Services (AWS) DynamoDB in .NET applications can be a breeze with the AWS SDK for .NET. However, when working with multiple AWS accounts or users, it can become cumbersome to keep track of the different sets of AWS credentials. This is where AWS profiles come in handy.

In this article, we will explore how to specify an AWS profile while working with the .NET SDK for AWS DynamoDB. By using an AWS profile, you can simplify the process of managing multiple AWS accounts and ensure that your application uses the correct set of AWS credentials.

Then we’ll provide a step-by-step guide on how to set up an AWS profile, install and configure the AWS SDK for .NET, and use the AWS profile with the .NET SDK for DynamoDB. By the end of this article, you’ll have a better understanding of how to simplify your AWS credential management while working with DynamoDB in your .NET applications.

Setting up the AWS Profile

Prerequisites

Before setting up a profile, make sure you have the AWS CLI installed and an access key ID and secret access key for your IAM user (see the Resources section at the end of this post for links).

Steps

To set up your AWS profile, follow these steps:

  1. Open your command prompt or terminal window.
  2. Run the aws configure command. This will prompt you for your AWS access key ID, secret access key, default region, and default output format preferences. Enter the values as prompted.
  3. After you have provided the necessary information, AWS CLI will save the settings as the default profile in two files: .aws/config and .aws/credentials.
  4. To set up an additional profile, run the aws configure command again, but this time include the --profile MyProfileName option. Replace MyProfileName with a name for your new profile.
  5. Provide the necessary information for your new profile, such as access key ID, secret access key, region, and output format preferences, when prompted.
  6. AWS CLI will save the settings for your new profile in the .aws/config and .aws/credentials files.

That’s it! You now have multiple AWS profiles that you can use with the .NET SDK for DynamoDB.

Installing and Configuring the AWS SDK for .NET

Installing

To install the AWSSDK.DynamoDBv2 package:

  1. In Visual Studio’s Solution Explorer for your project, right-click to show the context menu, and choose “Manage NuGet Packages”.
  2. In the NuGet Package Manager, search for the AWSSDK.DynamoDBv2 package and select the latest version to install.

Your project will now have the necessary SDK packages installed for connecting to DynamoDB.

Configuring

To use the default credentials with the AmazonDynamoDBClient object, you can simply instantiate the object without any parameters:

AmazonDynamoDBClient client = new AmazonDynamoDBClient(); 

If you need to specify additional configuration options for your AmazonDynamoDBClient, you can create an instance of AmazonDynamoDBConfig and pass it to the constructor. For example, to set the AWS region to us-west-2, you can use the following code:

AmazonDynamoDBConfig clientConfig = new AmazonDynamoDBConfig { RegionEndpoint = RegionEndpoint.USWest2 };
AmazonDynamoDBClient client = new AmazonDynamoDBClient(clientConfig);
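
To confirm the client works, here is a minimal, hedged sketch that reads a single item. The table name MyTable and the Id partition key are assumptions for illustration, not part of the original example:

using Amazon;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient(
    new AmazonDynamoDBConfig { RegionEndpoint = RegionEndpoint.USWest2 });

// "MyTable" and the "Id" partition key are hypothetical; substitute your own table and key schema.
var response = await client.GetItemAsync(new GetItemRequest
{
    TableName = "MyTable",
    Key = new Dictionary<string, AttributeValue>
    {
        ["Id"] = new AttributeValue { S = "1" }
    }
});

Console.WriteLine(response.Item.Count);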

Using the AWS Profile with the .NET SDK for DynamoDB

Configure an AWS profile

If you want to use a different AWS account than the one configured as default, you can use the profile you created in Section 2. To do this, you can specify the profile name and retrieve associated profile credentials. This code creates a CredentialProfileStoreChain object, which loads the credentials associated with a named profile from the AWS credentials file (~/.aws/credentials). If found, the credentials are passed to the AmazonDynamoDBClient object.

// Requires the Amazon.Runtime.CredentialManagement namespace
var credentialProfileStoreChain = new CredentialProfileStoreChain();
if (credentialProfileStoreChain.TryGetAWSCredentials("MyProfileName", out var awsCredentials))
{
    AmazonDynamoDBConfig clientConfig = new AmazonDynamoDBConfig { RegionEndpoint = RegionEndpoint.USWest2 };
    AmazonDynamoDBClient client = new AmazonDynamoDBClient(awsCredentials, clientConfig);
}

Note that you still need to specify the AWS region where your table is located, as shown in the code above.

Advantages of using AWS profiles with the SDK

Using an AWS profile with the SDK has several advantages over hardcoding access and secret keys directly in your code. For one, it’s more secure: instead of embedding your credentials in plain text within your code, you store them in a separate file outside your codebase, so they never end up committed to source control. This makes it harder for attackers to gain access to your AWS account.

Another advantage is that it simplifies the process of switching between AWS accounts or using different credentials for different applications. By creating multiple profiles with different access and secret keys, you can easily switch between them without having to modify your code or credentials each time.

Overall, using an AWS profile with the SDK improves security, simplifies the management of multiple AWS accounts, and makes it easier to maintain and modify your code.

Conclusion

Summary

In this blog post, we covered the steps for setting up and using AWS profiles with the .NET SDK for DynamoDB. We began by introducing the concept of AWS profiles and the benefits of using them, including increased security and ease of use. We then provided a step-by-step guide for setting up an AWS profile, including prerequisites such as installing the AWS CLI and generating access and security keys.

Next, we covered how to install and configure the AWS SDK for .NET and how to use the default profile. Finally, we explained how to use an AWS profile with the .NET SDK for DynamoDB, including an example code snippet and the advantages of using profiles for flexibility and ease of management.

By following the instructions and best practices outlined in this blog post, you should be able to set up and use AWS profiles with the .NET SDK for DynamoDB more effectively and securely, giving you greater peace of mind and allowing you to focus on building great applications.

Resources

AWS CLI installation
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

Access and Secret access keys
https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys

AWS Credentials Configuration
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/CodeSamples.DotNet.html#CodeSamples.DotNet.Credentials

CredentialsProfileStoreChain and Alternatives
https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/creds-locate.html

SQL Cheatsheet

Info

After struggling for hours on the interwebs I usually find the magic combination of SQL syntax for solving a particular problem.
The problem comes in when I don’t frequently work with SQL and forget what the syntax was that I used.

Here is a collection of my favourite SQL snippets

Updates

A simple Update query specifies the table to modify, the columns and values to set, and a Where clause to avoid updating all records in the table.

UPDATE MyTable
SET Name = 'sql noob'
WHERE Id = 1

But what if you want to update multiple records, each with a unique set of values? We can use an Update From approach. Typically this requires a data source containing the desired changes you want to apply to your table, and hopefully a matching identifier. If you have this, you can join your data source to the table you intend to change and do away with the Where clause.

UPDATE my
SET my.Name = tmp.Name
FROM MyTable my
INNER JOIN #TempWithUpdates tmp
ON tmp.Id = my.Id

Insert

Adds new rows to a table.

INSERT INTO table_name (column1, column2, column3, ...)
VALUES (value1, value2, value3, ...);

Delete

Removes one or more rows from a table.

DELETE FROM table_name
WHERE condition;

NOTE: the condition must be true for a row to be deleted

Subqueries

A query within another query. The results of the subquery are used in the main query.

SELECT column_name(s)
FROM table_name
WHERE column_name IN (SELECT column_name FROM table_name WHERE condition);

Merge

Synchronizes a target table with a source table by inserting, updating, or deleting rows depending on whether a match is found.

MERGE INTO target_table USING source_table
ON target_table.column = source_table.column
WHEN MATCHED THEN
UPDATE SET target_table.column = source_table.column
WHEN NOT MATCHED THEN
INSERT (column1, column2, column3, ...) VALUES (source_table.column1, source_table.column2, source_table.column3, ...);

Temp Tables

todo

WIP

where not exists
https://stackoverflow.com/questions/35857726/select-records-from-a-table-where-two-columns-are-not-present-in-another-table

where in
https://stackoverflow.com/questions/17548751/how-to-write-a-sql-delete-statement-with-a-select-statement-in-the-where-clause

insert into subquery
https://stackoverflow.com/questions/9692319/how-can-i-insert-values-into-a-table-using-a-subquery-with-more-than-one-result

cte VS temp table vs table variable
https://www.dotnettricks.com/learn/sqlserver/difference-between-cte-and-temp-table-and-table-variable#:~:text=Temp%20Tables%20are%20physically%20created,the%20scope%20of%20a%20statement.
https://stackoverflow.com/questions/2920836/local-and-global-temporary-tables-in-sql-server#:~:text=Local%20temporary%20tables%20(%20CREATE%20TABLE,have%20referenced%20them%20have%20closed

merge output
https://www.sqlservercentral.com/articles/the-output-clause-for-the-merge-statements

Amazon API Gateway - Throttling

Intro

API Gateway provides me with great coordination and routing capabilities, but without the necessary checks it can also put a big dent in my pocket. 💲💲💲

I often prototype ideas that don’t check the “production ready” criteria. This means that I can put something together quickly and have it meet my needs, but at the same time be vulnerable to accidental/intentional over-use.

This post contains details on how throttling works and how to configure it, even for smaller projects. 😉

Request submission measurement

A “token bucket algorithm” is used to measure the request submission rate.

API Gateway doesn’t enforce a quota on concurrent connections. The maximum number of concurrent connections is determined by the rate of new connections per second and maximum connection duration of two hours. For example, with the default quota of 500 new connections per second, if clients connect at the maximum rate over two hours, API Gateway can serve up to 3,600,000 concurrent connections.

Request measurement types

There are two types of usage classification used to measure API requests, namely steady-state and burst requests.

Steady State

A requests-per-second (RPS) measure across all APIs within an AWS account, per Region. This is what can be considered normal request-rate traffic for your API endpoints. Also referred to as the token.

Burst

A maximum concurrent request rate across all APIs within an AWS account, per Region. Typically an unexpected number of requests in a given period of time. Also referred to as the bucket.

Throttling options

API Gateway provides these options for configuring throttling:

  • Account-level: All routes and stages use the same throttling limit
  • Stage-level: All routes in the stage use the same throttling limit
  • Route-level: Only the individual route uses the configured throttling limit

NOTE: the default route throttling limits can’t exceed the account-level rate limits; to use different limits for individual routes, we have to configure them explicitly.
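
For stage-level limits, here is a rough sketch of what the configuration can look like in C# with the AWSSDK.APIGateway package for a REST API. The API ID, stage name, limit values and patch paths below are assumptions, so verify them against the current SDK and API Gateway documentation:

using Amazon.APIGateway;
using Amazon.APIGateway.Model;

var apiGatewayClient = new AmazonAPIGatewayClient();

// Hypothetical REST API and stage; the "/*/*" method setting key targets all routes in the stage.
await apiGatewayClient.UpdateStageAsync(new UpdateStageRequest
{
    RestApiId = "your-rest-api-id",
    StageName = "prod",
    PatchOperations = new List<PatchOperation>
    {
        new PatchOperation { Op = Op.Replace, Path = "/*/*/throttling/rateLimit", Value = "100" },
        new PatchOperation { Op = Op.Replace, Path = "/*/*/throttling/burstLimit", Value = "200" }
    }
});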

Limitation and quotas

There is a cap on what limits can be set per account, per Region. This can, however, be adjusted by reaching out to the AWS Support Center.

So what does the API caller see when throttled?

They get back a 429 Too Many Requests error response.

Sharing S3 files securely with Signed Urls

Background

I built a single page application (SPA) that allows users to get download access to a game I’m developing in my spare time. After downloading and playing the game, users can also fill in a survey to provide feedback or suggestions.
This post details how I’m securing the download link so that only authorized users have access.

Component flow

  1. The app can be logged into via AWS Cognito. This authorizes the user session and allows secure calls to Amazon APIGateway.
  2. A RESTful Amazon API Gateway is used to invoke AWS Lambda functions that retrieve and store any data needed by the app.
  3. Survey and other data is stored in an Amazon DynamoDB table.
  4. Game download files are stored in an Amazon S3 bucket.

S3 Signed Urls

The solution is to secure the download by generating a signed URL based on the user’s current auth session. Once they log out, their session expires and the URL no longer works. Of the different methods of generating signed URLs, using an SDK fits my use case best, as it allows me to programmatically create a URL when the app loads information.
Generating a presigned URL to share an object

By default, all S3 objects are private. Only the object owner has permission to access them. However, the object owner can optionally share objects with others by creating a presigned URL, using their own security credentials, to grant time-limited permission to download the objects.

But how?

I decided to create a new Lambda function that can access my S3 bucket and generate a signed Url. The AWS how-to guide lists four SDK options:

  • Java
  • .NET
  • PHP
  • Python

Python Lambda

I’ve never done any Python before, but the sample looked the most straightforward, so I opted to use it.
Here’s what I ended up with:

import json
import boto3

def lambda_handler(event, context):

    url = boto3.client('s3').generate_presigned_url(
        ClientMethod='get_object',
        Params={'Bucket': 'my-bucket-name', 'Key': 'game-dist-build-version'},
        ExpiresIn=86400)

    return {
        'statusCode': 200,
        'body': url,
        'headers': {
            'Access-Control-Allow-Origin': '*',
        }
    }

Boto3

What is it? It’s a Python SDK used to access AWS services.
Boto3 documentation

You use the AWS SDK for Python (Boto3) to create, configure, and manage AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). The SDK provides an object-oriented API as well as low-level access to AWS services

boto3.client('s3').generate_presigned_url
Here we request the S3 ‘module’ and execute the generate_presigned_url functionality.

Parameters

  • Bucket: The name of the S3 bucket that contains the build artifacts for my game.
  • Key: The specific version of the game I’m granting access to.
  • ExpiresIn: How long the URL remains valid. 86400 seconds translates to one day.
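
Since the rest of this blog leans on C#, here is a hedged equivalent using the AWS SDK for .NET (AWSSDK.S3 package); the bucket and key placeholders are the same as in the Python Lambda above:

using Amazon.S3;
using Amazon.S3.Model;

var s3Client = new AmazonS3Client();

// Same placeholder bucket/key as the Python sample; the URL expires after one day.
var presignedUrl = s3Client.GetPreSignedURL(new GetPreSignedUrlRequest
{
    BucketName = "my-bucket-name",
    Key = "game-dist-build-version",
    Expires = DateTime.UtcNow.AddSeconds(86400)
});

Console.WriteLine(presignedUrl);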

Other steps

AWS Amplify authentication with Cognito

AWS Amplify takes away a lot of the heavy lifting around hosting and deploying single page applications (SPA). But an application is useless if we can’t control user access to it.

Overview

This post details the actions needed to:

  • Create an Amplify project
  • Add Cognito authentication to Amplify
  • Add Users and User Groups to Cognito

Assumptions

  • You have a React app (I used create-react-app)
  • You have this app in a compatible Amplify version control (I used CodeCommit)

Create an Amplify project

Start in the AWS Amplify Console and select the Build app option

Select the Hosting environments tab

  • Select a code repository that contains your UI code
    • Connect a branch (select repo and branch)
  • Configure build settings
    • Create environment (e.g. staging/prod)
    • Enable CICD (deploy on each front or backend commit)
  • Specify a service role (Allows Amplify to access your other AWS resources)
    • Create a Service Role
      • Select Amplify service
      • Select AdministratorAccess-Amplify
  • After this you can review build settings in a generated yaml file
  • Save and deploy the app
  • You can now click on the Domain URL to view your live app.

Authentication

Configure Amplify App

Select the Backend environments tab

  • Initially you have to click “Get Started” to enable the Backend tab. This takes a few minutes.
  • Click Launch Studio
  • Go to the Set up > Authentication tab
    • Add a login mechanism. (This is where you can specify federated login types, e.g. Google or Facebook.) I chose a simple Email mechanism.
    • Specify Sign-in & sign-out URLs.
      • This allows for multiple URLs. I specified both my local and production URLs to enable login for both environments.
    • You can also set up MFA and password rules here. I opted for all the defaults, no MFA.
    • Deploy
  • Save the new authentication setup. This will start a CloudFormation deployment.
  • You can view the created authentication resources in Cognito or CloudFormation.

Add users and groups

While still in the Amplify Studio

  • Select the Manage > User management tab
  • In the Users tab click Create User
    • Specify email and temporary password
  • In the Groups tab click Create Group
    • Specify a group name
    • Select the new group and click Add User
    • Specify the users to add

Configure SPA App

Install the amplify CLI from here

npm install -g @aws-amplify/cli
  • From the Amplify application, Backend environments tab, find and expand the “Local setup instructions” toggle drop down.
  • Copy and run the amplify pull --appId {your app id} --envName {your env} command in your UI project.

This will sync the Cognito infrastructure and client credentials with your project. NOTE: the credentials are not committed to your source control; see the “aws-exports.js” file.

Install the aws amplify npm dependencies

npm install aws-amplify

Add a UI mechanism for sign-up, sign-in and sign-out.
Two options for doing this are:

  1. Pre-built UI components (Opting for this one)
  2. API calls

Install components for React

npm install aws-amplify @aws-amplify/ui-react

Example of my App.js file

import { Amplify } from 'aws-amplify';

import { withAuthenticator } from '@aws-amplify/ui-react';
import '@aws-amplify/ui-react/styles.css';

import awsExports from './aws-exports';
Amplify.configure(awsExports);

function App({ signOut, user }) {
  return (
    <>
      <h1>Hello {user.username}</h1>
      <button onClick={signOut}>Sign out</button>
    </>
  );
}

export default withAuthenticator(App);

Disable self-service sign-up

By default, users can register by following a sign-up and email verification flow. For my purposes, however, I want a very specific list of users to have access to the app, so I want to prevent the default sign-up action from being possible.

Go to AWS Cognito in the console

  • Select the user pool
  • Under the Sign-up experience tab select Edit self-service sign-up
    • Uncheck Enable self-registration
    • Save changes

NOTE: this does not hide the “Create Account” option; it merely prevents successful sign-ups. Ideally the option would be hidden as well.

Resources

https://docs.amplify.aws/console/
https://docs.amplify.aws/console/auth/authentication/
https://docs.amplify.aws/console/authz/authorization/

Toggle nodejs versions

There have been a number of times when I tried new frameworks or templates that depend on a specific version of nodejs. When this happens I’m faced with skipping the idea or messing around with my installed version of nodejs, and inadvertently causing issues for some of my other projects.

A simple solution

nvm - Node Version Manager

Allows easy installations and toggling of nodejs versions

Linux Install nvm

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash

Windows Install nvm

Find the latest release from the NVM Github page
Download the nvm-setup.exe and run it on your local computer.

Install nodejs version

nvm install 12
nvm install 14

Toggle current nodejs version

nvm use 14

Neural Networks For This Dummy

I’m really interested in AI and machine learning. I’ve been curious about neural networks for quite some time, especially since they try to simulate how the human brain works.

Goals

  • Understand how neurons and synapses work together
  • Understand how a network can make accurate predictions
  • Build a prediction system that acts on custom attributes

Idea - Bug Bash 🐛

What shall we predict?

Given two insects, each with different attributes, which is most likely to win in a fight?

How will this work?

Attributes will have weight values that, when summed, create a score.
The higher the score, the more likely the insect is to win.
The neural network will not know about this score, however; it only knows which attributes each insect has.
Given training data containing the attributes of both insects and a value to indicate the selected victor, the network should be able to identify a pattern and start accurately predicting results.

Attributes

  • Strength
  • Speed
  • Toughness
  • Rested

Example

1 = yes the insect has this attribute
0 = no the insect does not have this attribute
[strength, speed, toughness, rested]

InsectA = A
[1, 0, 1, 0] = strong, slow, tough and tired

InsectB = B
[0, 1, 0, 1] = weak, fast, fragile and rested

How will we feed the network this info?

Note: below is an example of the input structure, not the total input.
We will need to give the network more than one example in order to train it correctly.

Training Input
InsectA [1, 0, 1, 0] + InsectB [0, 1, 0, 1]
Input = [1, 0, 1, 0, 0, 1, 0, 1]

Training Expected Output
0 = InsectA wins
1 = InsectB wins

Test if it works

Create new insects with attribute combinations the network has not seen yet.
Run the network’s predict operation with this input and determine if it predicts the victor correctly.

Code sample

DNN-Demo

Conclusion

The sample code accurately predicts which insect will win.
The output is usually a decimal, e.g. 0.7, and thus you have to round up or down to get a 0 or 1,
e.g. 0.7 = 1, 0.3 = 0

Even though the prediction works as expected, I realized that the training data is what matters most for accuracy.
If you manipulate it, you can make it predict the opposite, where the insect with the fewest attributes wins.
Additionally, the sample does not show the benefit of a small-scale network over normal conditional programming. I could simply sum all the attributes and select the highest value as the victor. To get more value from the neural network, I suspect the prediction needs to:

  • Be more complex, something other than summing attribute values
  • Require real, dynamic training data and frequent retraining of the network.

I do however feel more comfortable about the topic and thus satisfied with this effort.

How it works

Network

A neural network simulated in code consists of input, hidden and output layers.
Input layers receive the initial values you provide the system.
Hidden layers pass values from input to output layers or to other hidden layers.
Output layers receive the results of the network’s calculations and output them.

A layer is a collection of neurons that each contain a value.
Neurons connect to other neurons in other layers via synapses.
A synapse contains a weight value.
As a neuron passes its value through a synapse, the value is modified (value * weight).
Neurons receiving modified values pass them through an activation function.
Depending on the value returned from the activation function, we can either block or pass the call to the next neuron.

Formula

x = input (neuron)
w = weight (synapse)
b = bias
wtx = (x1 * w1) + (x2 * w2) (total input)
z = wtx + b
a = σ(z) (sigmoid activation)

Training

  • We need to know what the desired output is, given a particular set of inputs
  • Randomise the starting weights
  • Compare the actual output with the desired output
  • Make small weight adjustments to compensate for the incorrect output
  • Rinse and repeat until the neural network is trained.
  • The network should now have the appropriate weights to predict accurately when provided with input similar to the training data.

NOTE: the training “experience” lives in the weights of the synapses. This is what we would store to create pre-trained networks.

Weighted Derivative Formula

But how much should we adjust the weights by when the output is incorrect?

adjustment = error * input * output * (1 - output)   (add this to each synapse's weight)
error = desired result - actual result
input = what we pass into the input layer nodes
sigmoid gradient curve = output * (1 - output)
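
To make the weight-update rule concrete, here is a minimal C# sketch of a single-neuron network trained with exactly this rule. The insect training rows are made-up examples (labelled by which insect has more attributes), not the data from the sample repo:

// Single-layer network: 8 inputs (two insects x 4 attributes) feeding one output neuron.
var random = new Random(1);
var weights = new double[8];
for (var i = 0; i < weights.Length; i++)
{
    weights[i] = random.NextDouble() * 2 - 1;
}

// Hypothetical training rows: [strength, speed, toughness, rested] for insect A, then insect B.
// Label 0 = A wins, 1 = B wins (here, simply whichever insect has more attributes).
var inputs = new[]
{
    new double[] { 1, 0, 0, 0, 0, 1, 1, 1 },
    new double[] { 1, 1, 1, 0, 0, 0, 0, 1 },
    new double[] { 0, 0, 1, 0, 1, 1, 0, 1 },
    new double[] { 1, 0, 1, 1, 0, 1, 0, 0 }
};
var expected = new double[] { 1, 0, 1, 0 };

double Sigmoid(double z) => 1.0 / (1.0 + Math.Exp(-z));

// Train: adjustment = error * input * sigmoid gradient, added to every synapse weight.
for (var epoch = 0; epoch < 10000; epoch++)
{
    for (var row = 0; row < inputs.Length; row++)
    {
        var z = 0.0;
        for (var i = 0; i < weights.Length; i++) z += inputs[row][i] * weights[i];
        var output = Sigmoid(z);

        var error = expected[row] - output;
        for (var i = 0; i < weights.Length; i++)
        {
            weights[i] += error * inputs[row][i] * output * (1 - output);
        }
    }
}

// Predict an unseen match-up; rounding the sigmoid output gives 0 (A wins) or 1 (B wins).
var unseen = new double[] { 0, 1, 1, 1, 1, 0, 0, 0 };
var total = 0.0;
for (var i = 0; i < weights.Length; i++) total += unseen[i] * weights[i];
Console.WriteLine(Math.Round(Sigmoid(total)));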

How to host a static website with AWS S3

I recently hosted a static website on AWS S3. This is pretty straightforward if you already have an AWS account and a user/role configured with Administrator privileges. You basically create an S3 bucket, make it public, and upload your website content directly from the AWS console.

I had to tweak a couple of things on my journey and ended up re-doing some of the setup. Because of this I decided to see how to automate the infrastructure and also the content deployment process for when my static site needed updating.

TLDR

If you care less about the how and just want a quick template, go here:
aws-s3-website-template

Plan of Action

Here are the things we need to do:

  • Install the AWS CLI and configure your AWS credentials. (your local PC)
  • Create an infrastructure file that defines a publicly accessible S3 bucket.
  • Create an infrastructure deployment script.
  • Create a content deployment script.

AWS CLI and config

You can install the AWS CLI from here.
Ensure you have your AWS config and credentials setup like this.

.aws/config

[profile your-aws-profile]
region = us-east-1
output = json

.aws/credentials

[your-aws-profile]
aws_access_key_id = {your access key id}
aws_secret_access_key = {your access key}

Infrastructure File

app.yaml

AWSTemplateFormatVersion: 2010-09-09

Parameters:
  BucketName:
    Type: String
    Description: Bucket Name
    Default: MyS3BucketWebsite

Resources:
  MyS3Bucket:
    Type: AWS::S3::Bucket
    Description: Bestest bucket eva
    Properties:
      AccessControl: PublicRead
      BucketName: !Ref BucketName
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: error.html

  MyS3BucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref MyS3Bucket
      PolicyDocument:
        Statement:
          - Action:
              - s3:GetObject
            Effect: Allow
            Resource:
              - !Sub arn:aws:s3:::${MyS3Bucket}/*
            Principal:
              AWS:
                - '*'

NOTE: You have to upload an index.html and an error.html file in order for the site to work properly.

Infrastructure Deployment Script

We will use CloudFormation to create the S3 bucket by executing a bash script. Typically you only do this once and can opt to just create your bucket in the AWS console instead. I like doing this scripting route for the following reasons:

  • It gives me an easy way to re-create my bucket if something ever goes wrong, e.g. hackers or me.
  • Serves as a template for new projects. Easy to copy, paste and modify.

I created a deployInfra.sh script that allows me to conditionally create or update the bucket.
The CloudFormation CLI command looks like this:

aws cloudformation "$1" \
--stack-name MyS3BucketWebsiteStack \
--template-body file://./app.yaml \
--profile your-aws-profile \
--region us-east-1

The $1 is an argument that is passed in and can be either create-stack or update-stack.

See full script
_defaultColor=$(tput sgr0)
_infoColor=$(tput setaf 3)
_updateCommand="update-stack"
_createCommand="create-stack"

function printInfo {
printf "${_infoColor}$1${_defaultColor}"
}

function printInfoLine {
printInfo "$1 \n"
}

function executeStackCommand {
printInfoLine "Stack Command '$1' Starting..."
aws cloudformation "$1" \
--stack-name MyS3BucketWebsiteStack \
--template-body file://./app.yaml \
--profile your-aws-profile \
--region us-east-1
}

printInfoLine "UI Deploy Script Starting..."

printInfoLine "Specify if you want to update or create this stack (update/create)"
read stackCommand

if [ "$stackCommand" == 'create' ]; then
executeStackCommand $_createCommand
fi

if [ "$stackCommand" == "update" ]; then
executeStackCommand $_updateCommand
fi

if [ "$stackCommand" != "update" ] && [ "$stackCommand" != "create" ]; then
printInfoLine "Nothing selected, Goodbey!"
exit 1
fi

Content Deployment

S3 CLI comes with a really nifty command, sync. This does all the heavy lifting for you by synchronizing source content with your S3 bucket.

  • Configure the website content as the source (on your pc)
  • Configure the S3 bucket to sync to.
aws s3 sync '/website-source-directory' 's3://MyS3BucketWebsite' \
--acl public-read \
--profile your-aws-profile \
--region us-east-1

I created a deployContent.sh script that executes the sync action.

See full script
_defaultColor=$(tput sgr0)
_infoColor=$(tput setaf 3)

function printInfo {
printf "${_infoColor}$1${_defaultColor}"
}

function printInfoLine {
printInfo "$1 \n"
}

printInfoLine "Sync S3 buck starting..."

aws s3 sync '/website-source-directory' 's3://MyS3BucketWebsite' \
--acl public-read \
--profile your-aws-profile \
--region us-east-1

printInfoLine "Sync S3 buck completed..."