Visions:NoobiesTutorial

From KIP Wiki

Abbreviated Names

 ECM: Eric
 JS:  Johannes
 KH:  Kai
 CM:  Christian Mauch
 CK:  Christoph
 TP:  Tom
 SS:  Sebastian Schmitt
 PM:  Paul
 AG:  Andreas Grübl
 AH:  Andreas Hartel
 BV:  Bernhard Vogginger
 OB:  Oliver Breitwieser
 MD:  Markus Dorn

Meetings

Hardware F***-up Meeting

The hardware users' meeting is usually on Monday at 10:30/ENI. Managed by SS (backup: PM)

TMA Meeting

Usually Monday 13:30/ENI

Chip Development (HICANN DLS) Meeting

Tuesday at 9:00/ENI. Managed by JS.

FPGA Development

There's an FPGA meeting every Tuesday at 15:30/ENI. Logs and agenda can be found here. Managed by AG (backup: ECM).

PCB Meeting

TODO: Wednesday?? Managed by AG.

ASIC Meeting

Wednesday, 15:00? Glasbox I?

F9/Vision(s)' Group Meeting

Thursday at 9:00/ENI. Mandatory for all. Managed by JS.

Softies' Meeting

Thursday at 14:00/ENI. Mandatory for software developers. Logs and agenda can be found here. Managed by ECM.

Workplace

Computers

We use a Debian Jessie-based default installation. The configuration is automatically managed. In case of package requests, please ask KH. Bugs and requests should be posted here.

Space

There are some places in the "Werkstatt" building (room 501) and in the container building. In case of a transient shortage of places, interns and bachelor students are expected to "fill up" all available places (i.e. they do not have a static assignment to a specific place).

Communication

To stay informed (and to provide information to others) you should join F9's IRC server on bldviz. The main channels are:

 Channel       Topic
 #softies      Software and stuff
 #hardies      Hardware
 #tma          Modeling
 #clusteraner  Cluster usage and announcements


Clients:

  • hexchat (GUI)
  • weechat-curses (CLI)
  • Pidgin (GUI) / finch (CLI)

You can access the channels in Pidgin by creating a new account (Accounts -> Manage Accounts) using the protocol IRC with your KIP username and password, and then joining a chatroom (Buddies -> Join a Chat) using that IRC account. (Leave the chatroom password blank.)

Accounts

Typically, you will need the following accounts:

  • KIP-Account (which you have when you can read this)
  • BrainScaleS-Account (which provides access to the gitviz repository)

If you work on the waferscale hardware or need access to the computer cluster ask ECM (and get an introduction by your supervisor). If you do chip or FPGA development, you need ASIC permissions (ask Markus Dorn).

KIP

The KIP login is used for accounting and authentication on all F9 desktops as well as servers.

BrainScaleS

BrainScaleS accounts are managed by Bjoern; write him an email if you need one. Access to repositories is managed by dedicated project managers (i.e. your supervisor can help you). In extreme cases, you can ask ECM/KH/JS -- the redmine administrators -- for help.

Cluster Access

The F9 cluster is part of the BrainScaleS and HBP hardware systems. In times of idle nodes (i.e. when the associated neuromorphic hardware parts are idle too), conventional software simulations can be run on the system. Please note that the cluster's main objective is controlling neuromorphic hardware, not number crunching. Having a KIP account gives you a home folder (on AFS, a distributed filesystem) on all machines running the default installation. However, this is not sufficient for cluster usage: you also need a "wang" home and cluster access permissions. Both are managed by ECM and KH. If your cluster home is missing, you will see an error message when you log into a compute server.

The frontend/login nodes are named:

  • ice
  • ignatz

and you can access them via SSH (the example uses ice, but any other frontend name works too):

 ssh -X ice
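If you connect often, an entry in your personal ~/.ssh/config saves typing. This is a sketch; the User value is a placeholder for your actual KIP login:

```
# ~/.ssh/config -- sketch; replace the User value with your KIP login
Host ice ignatz
    User your-kip-username
    ForwardX11 yes
```

With this in place, a plain `ssh ice` behaves like `ssh -X your-kip-username@ice`.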

Server Usage

The machines mentioned above are not the compute nodes themselves, but only the frontend to access the compute cluster. Large jobs (i.e. CPU/IO hogs or long-running things) on the frontend nodes will be killed by the administrators. So for heavy work (read: everything beyond bug or syntax fixing), please dispatch execution to the cluster:

 srun [your command]

The default job gets 1 CPU and 2GB of RAM. If your code runs in parallel or needs more memory, please specify this (e.g. 4 CPUs, 8GB RAM):

 srun -c 4 --mem 8G [your command]

To run the job in background, please use:

 sbatch --wrap "[your command]"

This creates a slurm-[jobid].out log file containing all the console output.
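For anything beyond one-liners, a batch script keeps the resource requests next to the command. The following is a sketch only: the file names (run_sim.sh, my_simulation.py) and resource values are made up for illustration and follow the srun example above.

```shell
#!/bin/bash
# Sketch of a Slurm batch script -- submit with: sbatch run_sim.sh
# (all names and resource values here are illustrative)
#SBATCH --job-name=my-sim
#SBATCH -c 4              # 4 CPUs
#SBATCH --mem=8G          # 8 GB of RAM
#SBATCH --output=slurm-%j.out

srun python my_simulation.py
```

The #SBATCH lines are read by sbatch exactly like the equivalent command-line options, so the script documents its own resource needs.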

To check the status of your jobs, use the squeue command.

Jobs can be aborted (cancelled) by using:

 scancel [jobid]

Accessing the BrainScaleS hardware is only possible via the wafer, nmpm and test_wafer queues; Spikey is accessible via the spikey queue. To select a specific Spikey board, the --gres=SpikeyXYZ option is used.


As a side note, the compute nodes' local time is set to UTC, so logged timestamps will be offset from your local time. However, this should never be a problem, as you should always work with non-local date/time formats (e.g. UNIX epoch, UTC or something similar).
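Converting a UNIX epoch timestamp from a log file into a readable UTC date is a one-liner in Python, using only the standard library:

```python
from datetime import datetime, timezone

# A UNIX epoch timestamp as it might appear in a cluster log file
ts = 1_700_000_000

# Interpret it explicitly as UTC -- never rely on the machine's local time
utc = datetime.fromtimestamp(ts, tz=timezone.utc)
print(utc.isoformat())  # → 2023-11-14T22:13:20+00:00
```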

FPGA & ASIC

Servers, software and libraries are managed by MD.

Servers

There are several login nodes for ASIC work, e.g., vmimas, vtitan, vrhea.

Using The Hardware

The HBP SP9 Guidebook provides introductions to both the Spikey system and the BrainScaleS system.

Core Hardware Components

For the BrainScaleS system, the NMPM hardware specification provides detailed information; see Jenkins doc job "HBP Spec". TODO: Write something about hardware stuff.


Data Management

The policy on F9-specific data storage is:


KIP/F9 Data Storage

 Mount Point             Storage Backend  Redundancy  Backup Strategy        Usable Size  User Quota  Typical Application
 /afs/kip/user/USERNAME  HDD              RAID        yes (1 version)                     10G         Distributed home directory
 /scratch                                             no                                             Scratch/temp data; might be deleted at any time
 /wang                   HDD              RAID6 (2R)  no                     13T          0.3T        General purpose
 /ley                    HDD              RAID6 (2R)  yes (ADSM; 1 version)  7T           0.1T        Important stuff (not too large!)
 /loh                    4x Archive HDD   RAID5 (1R)  no                     16T          1T          Archives of machines, homes, etc.
 ???                     SSD                          no

Software Development

Most (if not all) software developers work remotely on the server machines. Tools like screen or tmux can keep your session open between reconnects.
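A minimal tmux round trip looks like this (the session name is arbitrary; these are interactive commands, shown as a sketch):

```shell
tmux new -s work      # start a named session on the server
# ... your SSH connection drops ...
tmux attach -t work   # after reconnecting, resume exactly where you left off
```

screen offers the same workflow with `screen -S work` and `screen -r work`.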

Git

As a general rule, everything should be tracked in git. Period. If you are hearing about git for the first time, I highly recommend spending an hour going through a git tutorial of your choice.

Whether or not your own work should be tracked in the group's gitviz should be decided together with your supervisor. You will need to check out at least the nnsapling repo for read access. If you won't track your own work in gitviz, KIP offers a git server with up to 15 repositories per user at git.kip.uni-heidelberg.de
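If you want to try the basic git cycle before touching gitviz, a throwaway local repository is enough. Everything below runs in a temporary directory and uses placeholder names:

```shell
# Practice repository in a temporary directory -- nothing here touches gitviz
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "first experiment" > notes.txt
git add notes.txt
git -c user.name="Student" -c user.email="student@example.org" \
    commit -qm "Add notes"
git log --oneline    # one line per commit: <hash> Add notes
```

Delete the directory afterwards; the point is only to get comfortable with add/commit/log before working on shared repositories.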

License

Keep your code contributions (L)GPL-clean because we might want to publish them on a public website. If you copy code from somewhere, verify license compatibility and mention the source in a code comment!

Code Review

For core software components (and other repositories involving multiple developers), we use gerrit as a code review tool.

F9's Gerrit Server

Continuous Integration

We encourage continuous and automatic testing, see F9's Jenkins Server.

Bugreports

Should be posted immediately in the redmine project associated with the module that produced the error. The title should be descriptive (after ticket creation it may only be changed by 'project managers'). As a general rule, the traceback is necessary for the developer to find the actual bug, but the more relevant information you provide, the easier the fix. Ideally, you create a minimal example that reproduces the problem and upload the script, including the modules loaded.

F9's Redmine Server

Core Software Components

pyNN

An abstract modeling language to specify neural networks independently of the actual simulator used. The typical use case is as a frontend for simulations in NEST or NEURON. Documentation is available on its webpage (sort of, at least) [1]. To get started, it is best to ask someone for a simple working script and to reproduce it with minor changes (ideally your supervisor has some suggestions for you).

PyHMF

The BrainScaleS-hardware-specific PyNN implementation/backend. Maintainers: ECM

PyNN.hardware/PyHAL

The Spikey-hardware-specific PyNN implementation/backend. Maintainers: TP

Cake

Calibration framework for the BrainScaleS hardware. Maintainers: SS and MK

Euter/Ester

The C++-layer providing a representation of neuronal network descriptions (generated by PyHMF) -- used for BrainScaleS hardware. Maintainers: ECM, CK

Marocco

The translation layer which converts abstract neuronal networks into a hardware bit stream (i.e. a valid hardware configuration) -- used for BrainScaleS hardware. Maintainers: ECM, SS

StHALbe, hicann-system

StHAL, HALbe and hicann-system are the hardware access layers -- used for BrainScaleS hardware. Maintainers: ECM, CK

SpikeyHAL

Spikey hardware access layer. Maintainers: TP

HostARQ

The communication protocol stack for communication between the BrainScaleS hardware (FCP FPGAs) and host computers. Maintainers: (E)CM

ESS

The Executable System Specification is a BrainScaleS hardware simulator. Originally developed by AG as chip verification software, it evolved into a neuronal network simulator (by BV). Maintainers: BV, OB

Modeling Software Packages

SBS

Allows for simple creation of, and sampling with, Boltzmann machines made of pyNN neurons. A tutorial is available. Maintainer: OB