# Introduction

The following documentation is for those who are looking to contribute to wh2o project development. This document is organized by technology of interest, to help you find the part of the project that interests you most.

See below for a quick overview of the stack, or skip to the Getting Started section. If you'd like an overview of the project mission, see the About page.

# Brief Overview

Roughly 90% of the codebase is written in Go. There are many things about Go that we like, particularly type safety, built-in testing, and its ease of concurrency and distribution.

The backend is split into a series of HTTP servers and functions that can be run either as Lambdas or as Kubernetes CronJobs. Longer-running tasks, like web scraping and data processing, run in K8s.

Here's a rough outline of the backend system.

```
- Nginx (Ingress)
  |- Primary API (CRUD + Auth)
  |- Notify (Dispatches Email + SMS)
  |- Index (Interfaces with Elasticsearch)
  |- Forecast (Machine Learning stuff)
  |- Functions (Lambdas + CronJobs)
      |- data (cleans up database and S3)
      |- report (checks user daily reports)
      |- usgs (fetches readings from USGS)
      |- auckland (fetches New Zealand readings)
      |- canada (fetches Canadian readings)
      |- chile (fetches Chilean readings)
```

// @TODO make a fancy sys diagram.

# Native

We've shifted focus from the web-browser-based client to a React Native application as the primary interface (super fun!). The native application allows us to send immediate push notifications in addition to email and SMS.

# Need Help With

- Android builds.

# Elasticsearch

We use Elasticsearch to index gages and rivers; the SearchV2 endpoints built on top of it are in active development.

# Need Help With

- Query ordering + sorting
- Pagination strategies

# Kubernetes

The system is self-hosted on a multi-node K8s cluster leveraging Microk8s. As of today, the system is not highly available. The plan is to fully dial in the data model, secure the cluster, and generally optimize all the moving parts of the app before migrating to EKS.
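For a sense of how one of the scheduled functions lands on the cluster, here is an illustrative CronJob manifest. The name, schedule, and image are assumptions, not the project's actual configuration.

```yaml
# Illustrative only: names, schedule, and image are made up.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: usgs-fetch
spec:
  schedule: "*/15 * * * *"    # every 15 minutes
  concurrencyPolicy: Forbid   # don't let a slow run overlap the next one
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: usgs
              image: registry.example.com/wh2o/usgs:latest
          restartPolicy: OnFailure
```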

# Need Help With

- Network Policies + general security

# Browser Automation

We use a series of web scrapers, built with Rod, to pull gage readings from various sources. In an effort to be responsible netizens, the scrapers are written so as not to overburden target sites and servers.

# Need Help With

- Scrapers occasionally time out when running inside a container. This needs investigation.

# Machine Learning

The Gage Reading Forecasting API is written in Python; it leverages the Facebook Prophet library behind a simple Flask HTTP server.

// @TODO add notes

- Need to support other non-USGS gage sources
- Add other inputs/variables like snowpack


# CLI

Go makes it incredibly easy to write command-line interfaces. We use urfave/cli to help with the development workflow.
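The project uses urfave/cli, but the subcommand shape it gives you can be sketched with only the standard library's flag package. The `seed` command and its flag below are hypothetical, not real wh2o commands.

```go
// Dependency-free sketch of a subcommand-style CLI (the real tool uses urfave/cli).
package main

import (
	"flag"
	"fmt"
	"os"
)

// run dispatches on the first argument, parsing per-subcommand flags.
func run(args []string) (string, error) {
	if len(args) < 1 {
		return "", fmt.Errorf("expected a subcommand")
	}
	switch args[0] {
	case "seed":
		fs := flag.NewFlagSet("seed", flag.ContinueOnError)
		count := fs.Int("count", 10, "number of fake gages to insert")
		if err := fs.Parse(args[1:]); err != nil {
			return "", err
		}
		return fmt.Sprintf("seeding %d gages", *count), nil
	default:
		return "", fmt.Errorf("unknown subcommand %q", args[0])
	}
}

func main() {
	out, err := run(os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(out)
}
```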

# Need Help With

- We have a ton of bash scripts that need to be migrated over to CLI commands.


# React + TypeScript

The frontend is a simple SPA bootstrapped with Create React App (we have not had to eject yet) and TypeScript.

# Need Help With

- Responsive behavior. Unfortunately, the frontend was not built mobile-first and we are now playing catch-up.
- Explore alternatives to antd. That framework was chosen because it offers some great components and beautiful micro-animations, but its out-of-the-box mobile behavior is not so good.

# Internationalization

We currently support English, Spanish, and French. The frontend leverages react-i18next for hard-coded text content. In the data, each model has an accompanying Translation association for each language.
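The Translation-per-model idea can be sketched as a record that carries one translation row per supported language, with lookups falling back to English. The struct and field names here are assumptions about the data model, not its actual definitions.

```go
// Sketch of a model with per-language Translation rows and English fallback.
package main

import "fmt"

type Translation struct {
	Language    string // "en", "es", "fr"
	Name        string
	Description string
}

type River struct {
	ID           int
	Translations []Translation
}

// translationFor returns the river's translation for lang, falling back
// to English when the requested language is missing.
func translationFor(r River, lang string) Translation {
	var en Translation
	for _, t := range r.Translations {
		if t.Language == lang {
			return t
		}
		if t.Language == "en" {
			en = t
		}
	}
	return en
}

func main() {
	r := River{ID: 1, Translations: []Translation{
		{Language: "en", Name: "Green River"},
		{Language: "fr", Name: "Rivière Verte"},
	}}
	fmt.Println(translationFor(r, "fr").Name)
	fmt.Println(translationFor(r, "es").Name) // no Spanish row: falls back to English
}
```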

# Future State