Software: Overview
Revision as of 02:25, 8 November 2014

Software is continuously being developed, so this page is always out of date...

For detailed installation instructions, see How to load hivetool on the Pi. This project uses Free and Open Source Software (FOSS). The operating system is Linux, although everything should run under Microsoft Windows. The code is available at GitHub.

Hivetool can be used as a:

  1. Data logger that provides data acquisition and storage.
  2. Bioserver that displays, streams, analyzes and visualizes the data in addition to data acquisition and storage.

Both of these options can be run with or without access to the internet. The bioserver requires additional software and configuration: a web server (usually Apache), the Perl module GD::Graph, and perhaps a media server such as Icecast http://www.icecast.org/ or FFserver http://www.ffmpeg.org/ffserver.html to record and/or stream audio and video.

Linux Distributions

Linux distros that have been tested are:

  1. Debian Wheezy (Pi)
  2. Lubuntu (lightweight Ubuntu)
  3. Slackware 13.0

Data Logger

Software Flow Diagram - Data logger core
Software Flow Diagram - Key

Scheduling

Every 5 minutes cron kicks off the bash script hive.sh that reads the sensors and logs the data.
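The wiki does not show the crontab entry itself; a typical line (the script path here is an assumption for illustration) would look like:

```
# m    h  dom mon dow  command   -- run hive.sh every 5 minutes
*/5 *  *  *   *   /home/hivetool/hive.sh >/dev/null 2>&1
```

Cron output is discarded here; hive.sh does its own logging to hive.log.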

Initialization

Starting with Hivetool version 0.5, the text file hive.conf is read first to determine which sensors are used and to retrieve their calibration parameters.
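A minimal sketch of how a shell script might pull calibration parameters out of hive.conf. The file location, KEY="value" layout, and variable names are assumptions for illustration; the wiki does not document the actual format.

```shell
#!/bin/sh
# Sketch only: assumes hive.conf holds simple KEY="value" pairs.
CONF=/home/hivetool/hive.conf   # assumed location

# Source the file so each KEY becomes a shell variable.
[ -f "$CONF" ] && . "$CONF"

# The calibration parameters can then be applied to raw readings,
# e.g. weight = raw * SCALE_FACTOR + SCALE_OFFSET
echo "scale factor: ${SCALE_FACTOR:-unset}, offset: ${SCALE_OFFSET:-unset}"
```

Sourcing the config keeps hive.sh itself free of per-hive constants, so the same script runs unchanged on every installation.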

Reading the Sensors

Hive Weight

Several different scales are supported. For tips on scales that use serial communication, see Scale Communication. Scales based on the HX711 (see Frameless Scale) or the AD7193 (Phidget Bridge) analog-to-digital converters are supported on the Pi.

Temperature and Humidity Sensors

tempered reads the RDing TEMPerHUM USB thermometer/hygrometer. Source code is at github.com/edorfaus/TEMPered. Detailed instructions for installing TEMPered on the Pi are available.

Weather

cURL is used to pull the local weather conditions from Weather Underground in xml format and write it to a temporary file, /tmp/wx.html. grep and regular expressions are used to parse the data.
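The fetch-and-parse step can be sketched as below. The feed URL and XML field name are assumptions (Weather Underground's old XML API is not documented here); only the /tmp/wx.html temporary file and the cURL-then-grep approach come from the text.

```shell
#!/bin/sh
# Sketch: pull the weather feed and extract one value with grep + a regex.
WX_URL="http://api.example.com/weather.xml"          # hypothetical feed URL

# Write the feed to the temporary file; fall back to an empty file offline.
curl -s "$WX_URL" -o /tmp/wx.html || : > /tmp/wx.html

# Pull a single tagged value out of the XML,
# e.g. <temp_f>41.2</temp_f>  ->  41.2
TEMP_F=$(grep -o '<temp_f>[^<]*</temp_f>' /tmp/wx.html | sed 's/<[^>]*>//g')
echo "outside temperature: ${TEMP_F:-unknown} F"
```

grep with a regular expression is fragile against feed-format changes, but for a fixed XML layout it avoids pulling in a full XML parser on the Pi.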

Logging the Data

After hive.sh reads the sensors and gets the weather, the data is appended to a flat text log file hive.log, stored in a local SQL database by sql.sh, and written in xml format to the temporary file /tmp/hive.xml by xml.sh. cURL is used to send the xml file to a hosted web server where a perl script extracts the xml encoded data and inserts a row into the database.
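The three-way fan-out above can be sketched as follows. The hive.log and /tmp/hive.xml names come from the text; the record layout, XML element names, and upload URL are assumptions for illustration.

```shell
#!/bin/sh
# Sketch of the logging fan-out performed after each sensor read.
NOW=$(date '+%Y/%m/%d %H:%M:%S')
WEIGHT=41.6; TEMP=93.2; HUMIDITY=55          # example readings

# 1. Append one record to the flat text log (path is an assumption).
echo "$NOW $WEIGHT $TEMP $HUMIDITY" >> /tmp/hive.log

# 2. Write the same record in XML, as xml.sh would.
cat > /tmp/hive.xml <<EOF
<hive>
  <time>$NOW</time>
  <weight>$WEIGHT</weight>
  <temp>$TEMP</temp>
  <humidity>$HUMIDITY</humidity>
</hive>
EOF

# 3. Post the XML to the hosted server (URL is hypothetical); the perl
#    script on the far end inserts a row into the database.
curl -s -F "xml=@/tmp/hive.xml" "http://example.org/cgi-bin/upload.pl" || true
```

Keeping the flat log alongside the databases means the raw readings survive even if the SQL insert or the upload fails.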

Bioserver

Software Flow Diagram - Bioserver

Just as a mail server serves up email and a web server dishes out web pages, a biological data server, or bioserver, serves biological data that it has monitored, analyzed and visualized.

Visualizing the Data

GD::Graph

The Perl module GD::Graph is used to plot the data. The graphs and data are displayed with a web server, usually Apache. More detailed installation instructions are on the Forums.

Displaying the Data

Apache Web Server

When a request from a web browser comes in, the web server kicks off hive_stats.pl, which queries the database for current, minimum, maximum, and average data values and generates the HTML page. Embedded in the HTML page is an image link to hive_graph.pl, which queries the database for the detailed data and either returns the data in tabular form for download or generates and returns the graph as a GIF. hive_graph.pl can be called as a standalone program to embed a graph in a web page on another site.

Audio

IceCast

ffserver

Video