Visions:NoobiesTutorial

Revision as of 12:12, 7 August 2015

Abbreviated Names

 BK:  Björn Kindler
 JS:  Johannes Schemmel
 ECM: Eric Müller
 KHS:  Kai Husmann
 CM:  Christian Mauch
 CK:  Christoph Koke
 TP:  Tom Pfeil
 SS:  Sebastian Schmitt
 PM:  Paul Müller
 AG:  Andreas Grübl
 AH:  Andreas Hartel
 BV:  Bernhard Vogginger
 OB:  Oliver Breitwieser
 MD:  Markus Dorn
 OA:  Oscar Martín-Almendral
 FK:  Felicitas Kleveta

KIP Institute Login

The KIP login process is used for both physical (i.e. keys) and virtual (i.e. user accounts) access to KIP facilities. All F9 group services use some kind of KIP account/authentication. The login form can be found here: https://www.kip.uni-heidelberg.de/service/verwaltung/form_forms (needs KIP login).

Rules (as a hint for your supervisor) regarding the "LOGIN Computing" part:

  • state that the student should be added to the kip_vision mailing list (IT)
  • same for the KIP Visions wiki (IT)
  • chip design permission, if needed
  • interns and bachelor students should be assigned to the primary group F9_guests (field "3")
  • master students and beyond should be assigned to the primary group F9 (field "3")


Meetings

Meeting Title | Description | Location | Datetime | Manager
Hardware F****Up Meeting | Users and hardware people discuss current topics related to hardware usage. | ENI | weekly, Monday at 10:30 | SS (backup: PM)
PCB Meeting | | ENI | weekly, Monday at TODO | AG
TMA Meeting | | ENI | weekly, Monday at 13:30 |
HICANN DLS Meeting | chip design meeting | ENI | weekly, Tuesday at 9:00 | JS
FPGA Development | logs and agenda can be found here | ENI | weekly, Tuesday at 15:30 | AG (backup: ECM)
ASIC Meeting | | | weekly, Wednesday at 15:00? |
F9/Electronic Vision(s) Group Meeting | mandatory group meeting; logs and agenda can be found here | ENI | Thursday at 9:00 | JS
Softies' Meeting | mandatory Softies meeting ;) -- logs and agenda can be found here | ENI | Thursday at 14:00 | ECM
Journal Club | TODO | ENI | Friday at TODO | AH

Workplace

Computers

We use a Debian Jessie-based default installation. The configuration is automatically managed. In case of package requests, please ask KHS. Bugs and requests should be posted here.

Space

There are some desks in the "Werkstatt" building (room 501) and in the container building. In case of a transient shortage of space, interns and bachelor students are expected to "fill up" all available desks, i.e. they do not have a static assignment to a specific place.

Communication

To stay informed (and to provide information to others) you should join F9's IRC server on bldviz. The main channels are:

Channel | Topic
#softies | software and stuff
#hardies | hardware
#tma | modeling
#clusteraner | cluster usage and announcements


Clients:

  • hexchat (GUI)
  • weechat-curses (CLI)
  • Pidgin (GUI) / finch (CLI)

You can access the channels in Pidgin by creating a new account (Accounts -> Manage Accounts) using the protocol IRC with your KIP username and password, and then joining a chatroom (Buddies -> Join a Chat) with that IRC account (leave the chatroom password blank).

Accounts

Typically, you will need the following accounts:

  • KIP account
  • Flagship/ex-BrainScaleS account (which provides access to the gitviz repositories)

If you work on the wafer-scale hardware or need access to the compute cluster, ask your supervisor to write an email to ECM and to give you an introduction. If you do chip or FPGA development, you need ASIC permissions; ask your supervisor to write an email to MD (again, including an introduction).

Flagship/ex-BrainScaleS

Flagship/ex-BrainScaleS accounts are managed by BK. Your supervisor should write him an email (+CC to you). This account is also needed for Redmine/GitViz access.

Redmine/GitViz Permissions

Once your login works, please create ssh keys (as indicated in [1]) and upload the public key to gitviz. Afterwards, ask the project managers to add you to the repositories you need (your supervisor can help you). If the ssh key does not work, please stick to the description in the symap2ic wiki (it's always the user's fault ;p). If you need further help, you may ask ECM/KHS/JS, the gitviz/redmine administrators.
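If you have never generated a key pair before, a minimal sketch looks like the following (the file name and the empty passphrase are illustrative only; a passphrase is generally a good idea for real use):

```shell
# Create the .ssh directory and an ed25519 key pair (skipped if one exists).
mkdir -p ~/.ssh
[ -f ~/.ssh/id_ed25519 ] || ssh-keygen -q -t ed25519 -N "" -f ~/.ssh/id_ed25519
# Print the *public* key -- this is the part you upload to gitviz:
cat ~/.ssh/id_ed25519.pub
```

The private key (~/.ssh/id_ed25519) never leaves your machine; only the .pub file is uploaded.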

Cluster Access

The F9 cluster is part of the BrainScaleS and HBP hardware systems. In times of idle nodes (i.e. when the associated neuromorphic hardware parts are idle too), conventional software simulations can be run on the system. Please note that the cluster's main objective is controlling neuromorphic hardware, not number crunching. Having a KIP account gives you a home folder (on AFS, a distributed filesystem) on all machines running the default installation. However, this is not sufficient for cluster usage: you also need a "wang" home and cluster access permissions. Both are managed by ECM and KHS. In case of a missing cluster home you will see an error message when you log into a compute server.

The frontend/login nodes are named:

  • ice
  • ignatz

and you can access them via (example for ice; works with any other name too):

 ssh -X ice
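To save typing, you can put the login settings into your ssh client configuration. A minimal sketch of ~/.ssh/config (replace your-kip-username with your actual KIP login; ForwardX11 replaces the -X flag):

```
Host ice ignatz
    User your-kip-username
    ForwardX11 yes
```

With this in place, a plain `ssh ice` behaves like `ssh -X your-kip-username@ice`.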

Server Usage

The machines mentioned above are not the compute nodes themselves; they are only the frontends for accessing the compute cluster. Large jobs (i.e. CPU/IO hogs or long-running processes) on the frontend nodes will be killed by the administrators. So for heavy work (read: everything beyond quick bug/syntax fixing), please dispatch execution to the cluster:

 srun [your command]

The default job gets 1 CPU and 2GB of RAM. If your code runs in parallel or needs more memory, please specify this (e.g. 4 CPUs, 8GB RAM):

 srun -c 4 --mem 8G [your command]

To run the job in background, please use:

 sbatch --wrap [your command]

This creates a slurm-[jobid].out log file containing all the console output.
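For anything beyond a one-liner, it is common to put the options into a job script instead of the command line. A sketch (the output pattern and my_simulation.py are hypothetical placeholders for your actual workload):

```shell
#!/bin/bash
#SBATCH --cpus-per-task=4      # same effect as srun -c 4
#SBATCH --mem=8G               # same effect as --mem 8G
#SBATCH --output=slurm-%j.out  # %j is replaced by the job id

python my_simulation.py
```

Submit it with `sbatch my_job.sh`; the directives in the script replace the command-line flags shown above.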

To check the status of your jobs, use:

 squeue

Jobs can be aborted (cancelled) by using:

 scancel [jobid]

Accessing the BrainScaleS hardware is only possible via the wafer, nmpm and test_wafer queues; Spikey is accessible via the spikey queue. To select a specific Spikey setup, use the --gres=SpikeyXYZ option.
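Put together, a hardware job submission might look like this sketch (the -p flag selects the queue/partition; "Spikey504" is a made-up example id, use the name of your actual setup):

```
srun -p spikey --gres=Spikey504 [your command]
```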

More details can be found here [2].

As a side note, the compute nodes' local time is set to UTC, so logged timestamps will be offset from your local time. This should never be a problem, however, as you should always work with non-local date/time representations (e.g. the UNIX epoch or UTC).
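You can see the difference (and the timezone-independent alternatives) on any machine:

```shell
# Wall-clock time of the machine (UTC on the compute nodes):
date
# The same instant printed as UTC, regardless of the machine's timezone:
date -u
# Seconds since the UNIX epoch -- identical on every node:
date +%s
```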

FPGA & ASIC

Servers, software and libraries are managed by MD.

Servers

There are several login nodes for ASIC work, e.g., vmimas, vtitan, vrhea.

Administrative Stuff

Travel

  • before travel: fill out travel request form (Dienstreiseantrag) and hand over to FK
  • after travel: fill out reimbursement form (Dienstreiseabrechnungsformular) (provide invoices etc.) and hand over to FK
  • when getting back the result: check if everything is correct, hand over to OA

The forms can be found here: https://www.kip.uni-heidelberg.de/service/verwaltung/form_forms (needs KIP login).

Using The Hardware

The HBP SP9 Guidebook provides introductions to both the Spikey system and the BrainScaleS system.

Core Hardware Components

For the BrainScaleS system, the NMPM hardware specification provides detailed information; see Jenkins doc job "HBP Spec". TODO: Write something about hardware stuff.


Data Management

The policy on F9-specific data storage is:


KIP/F9 Data Storage

Mount Point | Storage Backend (Redundancy) | Backup Strategy | Usable Size | User Quota | Typical Application
/afs/kip/user/USERNAME | HDD RAID | yes (1 version) | | 10G | distributed home directory
/scratch | | no | | | scratch/temp data; might be deleted at any time
/wang | HDD RAID6 (2R) | no | 13T | 0.3T | general purpose
/ley | HDD RAID6 (2R) | yes (ADSM; 1 version) | 7T | 0.1T | important stuff (not too large!)
/loh | 4x archive HDD RAID5 (1R) | no | 16T | 1T | archives of machines, homes, etc.
??? | SSD | no | | |

Software Development

Most (all?) software developers work remotely on server machines. Tools like screen or tmux can keep your session open between reconnects.

Git

As a general rule, everything should be tracked in a version control system; in our case, that is git. Period. If you are hearing about git for the first time, I highly recommend spending an hour on a git tutorial of your choice.

Whether your own work should be tracked in the group's gitviz or not should be decided together with your supervisor. Mostly, you will need to check out some repositories with read access anyway. If your own/private work is not throwaway, you should request a repository (or contribute to an existing one) on gitviz; have your supervisor ask KHS for a repository.
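If you want a quick, server-free warm-up before touching gitviz, the basic cycle looks like this (paths, names and messages are arbitrary):

```shell
# Create a throwaway repository and make a first commit.
mkdir -p /tmp/git-warmup && cd /tmp/git-warmup
git init -q .
echo "my first note" > notes.txt
git add notes.txt
# -c sets a name/email just for this command, in case git is unconfigured:
git -c user.name="Noobie" -c user.email="noobie@example.org" \
    commit -q -m "first commit"
git log --oneline
```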

License

Keep your code contributions (L)GPL-clean because we might want to publish them on a public web site. If you copy code from somewhere, verify license compatibility and mention the source in a code comment!

Code Review

For core software components (and other repositories involving multiple developers), we use Gerrit as a code review tool. F9's Gerrit server and a small tutorial can be found here [3].

Continuous Integration

We encourage continuous and automatic testing, see F9's Jenkins Server. Contact KHS for details.

Bug reports and redmine project management

Bugs should be reported immediately in the redmine project associated with the module that produced the error. The title should be descriptive (it can only be changed by 'project managers' after the ticket has been created). As a general rule, the traceback is necessary for the developer to find the actual bug, but the more relevant information you provide, the easier the fix. Ideally, create a minimal example that reproduces the problem and upload the script, including the modules loaded. See F9's Redmine server.

Core Software Components

This section gives a very brief description of the main software packages developed for the NMPM-1 and Spikey systems.

PyNN

A Python API for specifying neural networks independently of the actual simulator used. The typical use case is as a frontend for simulations in NEST or NEURON. Documentation is available on its webpage (sort of, at least) [4]. To get started, it is best to ask someone for a simple working script and to reproduce it with minor changes (ideally your supervisor has some suggestions for you).

PyHMF

The BrainScaleS-hardware-specific PyNN implementation/backend. Maintainers: ECM, CK

PyNN.hardware/PyHAL

The Spikey-hardware-specific PyNN implementation/backend. Maintainers: TP (backup: ECM)

Cake

Calibration framework for the BrainScaleS hardware. Maintainers: SS and MK

Euter/Ester

The C++-layer providing a representation of neuronal network descriptions (generated by PyHMF) -- used for BrainScaleS hardware. Maintainers: ECM, CK

Marocco

The translation layer which converts abstract neuronal networks into a hardware bit stream (i.e. a valid hardware configuration) -- used for BrainScaleS hardware. Maintainers: ECM, SS

StHALbe, hicann-system

StHAL, HALbe and hicann-system are the hardware access layers -- used for BrainScaleS hardware. Maintainers: ECM, CK

SpikeyHAL

Spikey hardware access layer. Maintainers: TP (backup: AG or ECM)

HostARQ

The communication protocol stack for communication between the BrainScaleS hardware (FCP FPGAs) and host computers. Maintainers: ECM, CM

ESS

The Executable System Specification is a BrainScaleS hardware simulator. Originally developed as chip verification software by AG, it evolved into a neuronal network simulator (by BV). Maintainers: BV, OB

Modeling Software Packages

SBS

Allows for simple creation of, and sampling with, Boltzmann machines made of PyNN neurons. See the tutorial. Maintainer: OB

TODO: Tutorial link is dead!