Our engineers are hard at work building a better web experience. Articles, solutions, tips and technical information will be published here as we move content into the fresh new site. So if you don't see helpful information here right now, chances are you will tomorrow.


Does Intersect offer training?

Delivered by Intersect's team of experts, training courses are customised and regularly updated. On-campus, lab-based courses provide practical, research-relevant, hands-on exercises.

Head on over to our website to read more.

What's changed?

We have revamped our support portal and switched the underlying technology that powers it from ServiceNow to FreshDesk. This means a few things have changed. Here are some handy pointers to assist you with the transition.

  1. All of your tickets (both active and closed) have new numbers. Old ticket numbers were of the format INC0010039; the new format for the same ticket is INC-10039. To convert, simply take the last five digits of your old ticket number and prefix them with INC-. Search for the new number and you'll find your ticket is still there.
  2. For any tickets active over the transition period, you will have received an email advising you of your new ticket number. This email also contains a link to the ticket in the new system.
  3. If you previously viewed your Space Plans and Space Products in the old system, you will notice they are missing at the moment. We are working on a replacement solution as we migrate all processes across; in the meantime, if you need this information, please let us know and we will work with you on an interim solution.
  4. There's a brand new "Solutions" knowledge base. To begin with it will be pretty sparse, but we'll be improving it week after week as we move more information into the system.
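The ticket-number conversion described above is mechanical, so it can be scripted if you have many old numbers to translate. A minimal sketch using sed; the example number INC0010039 is taken from this article, and the pattern simply keeps the last five digits:

```shell
# Convert an old ServiceNow ticket number (e.g. INC0010039) to the
# new FreshDesk format by keeping the last five digits and
# prefixing them with "INC-".
old="INC0010039"
new=$(printf '%s\n' "$old" | sed -E 's/^INC[0-9]*([0-9]{5})$/INC-\1/')
echo "$new"   # INC-10039
```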
We are still refining the look, feel and functionality, so please bear with us while we iron out the kinks!

Kind regards,

The team

Space and Time email notification lists

This article describes the steps for subscribing and unsubscribing to Intersect’s email notification lists. These notification lists advise customers about our planned outages. By default, all Space and Time customers are automatically subscribed to our notification lists.

How can I subscribe to Space and Time email notification lists if I am a Time or Space user?

Simply send an email to: and/or

(This email needs to come from the requester’s email address and does not need to contain any specific subject or body)

How can I unsubscribe from Space and Time email notification lists?

Simply send an email to: and/or

(This email needs to come from the requester’s email address and does not need to contain any specific subject or body)

Subscribe to Space Notifications List Example

1. Send an email to

2. Receive email confirmation:

3. Click the Join This Group button

4. Enter your email address in the pop-up window.

5. You have now subscribed to the email notification list

Unsubscribe from Space Notifications List Example

1. Send an email to

2. Receive email confirmation


3. You have now unsubscribed from the email notification list

Intersect's Internet Connectivity
Intersect's connection to the Internet is provided by AARNet, the supplier of Internet services for Australian universities, schools, research organisations and the CSIRO.

Intersect has two 10 Gbit/second links to the Internet. The two links follow diverse paths, with each link connecting independently to AARNet: each of the two Intersect edge routers runs to a different AARNet ‘Meet Me’ room at our data centre and connects from there, via independent connections, to the AARNet backbone at Macquarie Park and Rosebery.

The use of two connections increases the reliability of access to Intersect services. Interruption to one link causes all traffic to fail over automatically to the remaining link. The use of diverse paths makes it extremely unlikely that a single incident, for example civil works cutting through a fibre cable, will affect both links. It also gives us increased capacity when both links are working.
Enhancing Intersect's Reliability through Hardware Redundancy and High Availability

Technical Architecture

Following is a brief description of the hardware and software redundancy employed within the Intersect eResearch Nexus. The goal is to use commodity hardware and take advantage of all features that enhance the reliability and remove single points of failure where possible. All services are configured with an Active/Active architecture and the various components have been selected and designed to work this way.

A list is included for some typical Failure / Recovery Scenarios.

  • Every physical device (servers, storage and switches) has dual power supplies.
  • Each rack has two independent power circuits fed from separate data centre supplies.
  • Each power supply is connected to a different power circuit.
  • The data centre provides uninterruptible power supply, air conditioning and cooling services.
  • There are two core routers.
  • Each core router has a link to AARNet. Both links are active, and traffic automatically switches over to the other router on failure.
  • The systems use two network switches per rack, linked into a single logical stack.
    • The stack has at least one uplink per switch to the core routers.
    • If an uplink port fails the other uplinks automatically carry the traffic.
  • Each server uses two network ports in an aggregated link. The individual ports are connected to separate switches.
  • If a network port has an error, the other port continues to carry the traffic.
  • If a switch fails, the server uses the other port to carry the traffic.
Space Storage
  • Space Storage consists of a number of storage controllers, disk arrays and servers.
  • The Storage Area Network (SAN) uses two fibre channel switches.
    • All disk arrays, controllers and servers use at least two fibre channel connections for the Storage Area Network.
    • All disk arrays, controllers and servers have connections to both fibre channel switches.
  • There are three servers providing Network File Services to clients.
    • Each server complies with the power and network design above.
  • NFS automount is used on clients with all three servers as a target.
    • At mount time the most suitable NFS server is selected.
  • When data is written to tape a copy is written to two separate tapes. Either copy can be used to read back the file when required.
  • The OpenStack control infrastructure consists of several virtual machines running across three physical servers.
  • Each of the physical servers is in a different rack.
  • The various OpenStack services are distributed across these three servers.
  • The MySQL database cluster has three virtual machines, one running on each server. HAProxy provides access to an active database.
  • The RabbitMQ message queue has three virtual machines, one running on each server. Services are configured to query all three systems.
  • There is one availability zone with two cell controllers, running on separate servers. The cell controllers manage scheduling of new virtual machines, reporting of statistics, and various management functions for the core OpenStack services.
  • The compute servers have local storage for hosting virtual machines.
  • Operating system disks all use RAID1.
  • Data disks for virtual machines are RAID6, allowing for two disk failures in each node before loss of service.
  • There is remote console access to allow reboot or recovery from system crashes without attending the site.
  • There is extensive logging and monitoring to track hardware warnings and faults.
  • The operating system installation for all physical and virtual systems is automated using network boot and templates to ensure rapid and consistent installation.
  • The software configuration of all systems is managed using Puppet, so configurations are automated and consistent across all systems.
  • Nagios configuration is generated automatically from the Puppet configuration. Adding a new service or system to Puppet causes a corresponding Nagios test to be created and added automatically.
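The NFS automount arrangement described above (three file servers, with the most suitable one selected at mount time) is typically expressed as a replicated entry in an autofs map. A sketch of what such a map entry might look like; the mount point, server and export names here are hypothetical, not Intersect's actual hosts:

```
# /etc/auto.data -- hypothetical automount map entry.
# Listing several servers for one key makes this a replicated mount:
# at mount time autofs probes the listed servers and picks the
# closest / most responsive one.
space  -ro,soft  nfs1,nfs2,nfs3:/export/space
```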

Failure / Recovery Scenarios

For each failure scenario below, we describe the continuity of service: what happens in the infrastructure and what we do to recover it.

One of the power supplies to a physical server, disk array or switch fails


The other power supply continues to carry the load. Normal operation continues with no loss of service. A service call is raised with the vendor to replace the failed part.

One of the network ports goes down


The other port in the aggregated link continues to carry the traffic. Normal operation continues with no loss of service. A service call is raised with the vendor to replace the failed part.

One of the network switches or routers fails


All servers are connected to more than one switch so the other port in the aggregated link continues to carry the traffic. Normal operation continues. A service call is raised with the vendor to replace the failed part.

Comms link to AARNet lost


The network routes automatically change to use the alternate AARNet link. A brief pause in traffic flow, typically less than 30 seconds, may occur; traffic then continues without further service interruption.

Compute server goes offline


Server recovery may require restarting services or rebooting. After system update/verification VMs are manually restarted. Intersect maintains a pool of spare servers, so in the event of a likely extended outage we would swap the system and/or reallocate the VMs to other servers, whichever is most appropriate.
If there is a hardware failure then a service call is raised with the vendor.

A hard disk fails


RAID allows the remaining disk(s) to provide service. A service call is raised with the vendor to replace the failed part. The new disk can be installed without interruption to service.

Space: Vast storage
SpaceVault and TimeVault RPO and RTO Service Levels

Service Levels

As with any backup service, SpaceVault and TimeVault service levels apply to standard collections that are within standard size thresholds and for standard restorations, as described in the SpaceVault and TimeVault brochures - see and

Recovery Point Objective

1 day

That is, after a restore from the most recent backup, at most one elapsed day’s worth of changes will be missing.

Recovery Time Objective

1 business day

That is, restored data will be available to you within at most one business day.


  • Object storage, for example SWIFT or S3.
  • Virtual machine operating system/image snapshots.
  • External storage, for example collections.

TimeVault does not attempt file or block de-duplication.

Last Reviewed: March 2019 

Space is big, really big. Space is a large-scale, high-performance, collaborative and cost-effective digital storage system specially tailored, designed and constructed by researchers, for researchers. Space offers continuously growing capacity, up to 50 petabytes, of fast, reliable and safe active and archive data retention.

Read more about Space on our website.


SpaceShuttle is a storage solution that securely transfers, stores, manages and shares large amounts of active research data. Data is read, written, copied or moved over the Internet at maximum speed using the most efficient transmission technology available. SpaceShuttle uses a combination of disk and tape with two copies stored on tape so you can be confident of its integrity.


DeepSpace is a low cost storage solution for researchers requiring long-term, networked data storage. It is ideally suited for large amounts of data and complete collections or published datasets that are accessed infrequently. DeepSpace primarily uses tape storage to keep costs low and has longer retrieval times than Intersect's active storage - SpaceShuttle and SpaceLab. Three synced copies of your data are stored on tape and disk at two different locations, so you can be confident of integrity and security. 


SpaceLab puts your research software right next to your data in a secure hosted cloud without any hardware, upfront cost or lengthy delays. SpaceLab links Intersect Space and Time compute to provide a powerful and stable platform to run tailored applications in an environment that is just right for your unique research.

Time: Fast computing
Research at light speed

Make your computing super with Intersect Australia’s shared high-performance cluster, virtual and cloud environments. Time offers big computing platforms for research. Researchers can choose between parallel processing for maximum performance, cloud computing for horizontal scale, or dedicated hosting for domain-specific applications.

Find out more about Time on our website. And if you have any questions please get in touch. 

LocalTime Service Catalogue

See LocalTime Service Catalogue

Getting started with

Create a new project for your resources

The compute and storage resources you ask for are supplied by Intersect through our partner, the National Computational Infrastructure (NCI). Apply for a new project using the Mancini resource allocation system; instructions are available in the NCI user guide.

Outside of the normal ICMAS and NCMAS resource preallocation rounds you can typically ask for up to 20k SUs per quarter. Your project can get additional resources during the mid-term adjustment in the middle of each calendar quarter. The research argument can be brief.

Connecting to Raijin
To connect to Raijin you'll need a Secure Shell (SSH) client. If you're on Linux or Mac OS X, chances are you can just open a terminal and issue one of the following commands:


When prompted, enter your username and password. If you're running Windows, you'll need to download an SSH client; a good free option is PuTTY.
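The original command examples did not survive the migration to this portal, so here is roughly what they would look like. The username `abc123` is a placeholder, and the hostname is an assumption drawn from NCI's public documentation for Raijin:

```
# Plain login session:
$ ssh abc123@raijin.nci.org.au

# Or, if you need graphical (X11) programs displayed locally:
$ ssh -Y abc123@raijin.nci.org.au
```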

To transfer files to and from your machine, you'll need a file-transfer client that supports SFTP (the SSH File Transfer Protocol). You can do this from the command line on Linux, Mac OS X, and Windows:


But you might find it easier to use a graphical file-transfer program, such as FileZilla.
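As with the login example above, the original command block is missing; a command-line SFTP session would look roughly like this (the username, hostname and file names are placeholders):

```
$ sftp abc123@raijin.nci.org.au
sftp> put results.dat      # upload a file from your machine
sftp> get output.log       # download a file from Raijin
sftp> quit
```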

Software environment and availability on

Raijin Software Registry

A complete list of all software installed is available at

Please note that software packages marked with a yellow dot carry license restrictions. Please contact us to establish whether you can use this software.

Software installation and requests

In addition to the standard catalogue, you are also welcome to install software yourself in your home directory.

If you would like to have new software added to the standard catalogue, please email and specify the download site.

Setting up software environments

To set up the environment for a software package, use the module system. To see the exact names of the modules, visit the links given above, or get a list with the command:

module avail

To load, for example, the latest version of the Intel compilers on Raijin use

module load intel-fc/

This sets up your environment (variables and path). The reason for using this module system is to allow for different versions of the same software package.

Other useful basic module commands include:

  • To list the modules already loaded: module list
  • To show what a module does: module show package-name
  • To unload a package: module unload package-name
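Putting those commands together, a typical session might look like the following. The module name here is illustrative; run `module avail` on Raijin to see the real list of names and versions:

```
$ module avail                 # list every available module
$ module load intel-fc         # load the default Intel Fortran compiler
$ module list                  # confirm what is currently loaded
$ module show intel-fc         # see which variables and paths it sets
$ module unload intel-fc       # remove it from your environment
```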

Getting started with Nectar Cloud

There are a number of important pieces of information relating to NeCTAR VMs, including the size of the instance, storage options, access and security, and reliability.

How do I access Nectar Cloud?

To access the research cloud, you will need to log in to its Dashboard via AAF using your university login and password. Once logged in, you will see a Dashboard with your ‘personal trial project’ (which is identified with a ‘pt-’ code). If you have additional projects, such as project allocations, these will be listed in the Dashboard as well.

Applying for Project Resources

To obtain a project allocation, you need to submit a request through the NeCTAR Dashboard. Allocation requests are reviewed by an allocation committee for merit, suitability to cloud use, and available capacity of the cloud and, for Intersect, authorised by a participating member organisation.

When submitting the online form to request a project allocation, please provide as much information as possible regarding your project to expedite the process. You'll be asked to select the flavour that you will require. An application for a small or a medium-sized allocation is more likely to be approved than an application for an XXL sized allocation. For this reason, explain why you need the size of server you are requesting, with as much information as possible.

Generating SSH credentials

Almost every VM requires command line access using the Secure Shell (ssh) protocol. This means before creating your first virtual machine you will also need to create an SSH keypair. A keypair works like a lock and key and means that you do not need a password to log in, as long as you have your private half of the key pair (the key), and the server has the public half of the key pair (the lock). SSH keypairs are very secure, as you never transmit a password over the web to log in to the server.

Keypairs can be generated using the NeCTAR dashboard in the ‘access and security’ tab. When you have generated a keypair, the public key will be written to the VM, and the private key will be available for you to download and place in your .ssh directory. You will then need to configure your machine to use it to connect to the VM. If you already have a keypair or if you create one on your own machine, you may upload your public key to the dashboard. Once you have a keypair in your NeCTAR account, you will be able to use the public key for any instance you build, and the private key on any machine you wish to connect from.

You must create/upload your keys prior to launching a VM, otherwise NeCTAR will be unable to write the public key into the .ssh directory of the VM. You will also be unable to connect from any client device. For assistance in creating and using SSH keypairs, see the SSH keypairs technical guide.
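If you prefer to create the keypair on your own machine and upload the public half to the dashboard, the standard OpenSSH tool is ssh-keygen. A minimal sketch; the file name and comment are your choice, and passing an empty `-N ""` creates the key without a passphrase (consider setting one for extra protection):

```shell
# Create ~/.ssh if it does not exist yet, then generate an ed25519 keypair.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/nectar_key -N "" -C "nectar cloud key"

# ~/.ssh/nectar_key      -> private key; keep it on your machine
# ~/.ssh/nectar_key.pub  -> public key; upload this to the dashboard
```

Once a VM is running, you would connect with something like `ssh -i ~/.ssh/nectar_key <user>@<vm-ip>`, where the default username depends on the image you chose.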

Nectar Cloud Flavours

Project resources are consumed as 'flavours' of virtual computing. There are four classes: Balanced (m3), RAM-optimised (r3), CPU-optimised (c3) and Tiny (t3). Full information is available on the Nectar Flavours page of the Nectar knowledgebase.

Researchers initially receive a personal trial project with 2 VCPUs allocated for 3 months. This means you can run two small flavours or one medium flavour for three months, or one small VM for a total of six months.

Creating a Virtual Machine

Building an instance, as part of either your personal trial or a project allocation, is done through the Dashboard. A wizard will guide you through the process, which consists of selecting the ‘flavour’ (size) of the machine, the image to boot from, the ‘availability zone’, which is the node it will be hosted on, and some additional details. Once built, a VM can be imaged, terminated, shut off or rebooted as necessary, and more instances can be deployed, as long as your total usage does not exceed the resources available in your project.

Attaching Storage

You can apply for persistent block storage (volumes) for your allocation. Volumes work like network-attached storage devices: they can be mounted and unmounted from VMs within your project allocation (as long as they are within the same availability zone), the data remains persistent when not attached to a VM, and they can be backed up with snapshots.

NeCTAR also offers object storage for higher reliability, although configuring it requires more advanced skills. The benefits of object storage are that the data can be distributed across many availability zones and can be accessed via HTTP tools even when not attached to any instance.

Connecting across the Internet

In order to connect to and work on your VM, you will need to configure security groups (firewall rules), which allow traffic through certain ports for different kinds of access. The default security groups for project trials are:

  • SSH opens tcp port 22 to traffic from all sources (for logging in via ssh)
  • HTTP opens tcp ports 80 and 443 to traffic from all sources (for web servers)
  • ICMP opens all ICMP traffic from all sources (to allow pinging your VM's IP address).

For project allocations, all security groups need to be configured. See the Security Groups technical guide for assistance. Security groups can be configured and created while a VM is running. Changes will take effect immediately.
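The same rules can also be managed from the command line with the python-openstackclient tool, if you prefer it to the dashboard. A sketch of adding an inbound SSH rule; `my-group` is a hypothetical security group name you would have created yourself:

```
# Allow inbound SSH (tcp/22) from any address into the group:
$ openstack security group rule create --protocol tcp --dst-port 22 \
      --remote-ip 0.0.0.0/0 my-group
```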

Reaching your quota

When a project’s resource quota has been reached, an email is sent to your account warning that the project's VMs will be terminated on a specific expiry date. At this point, you should ensure that you have a backup of your data and take a snapshot of the VM. If you still need the machines to run after the end date, you will need to submit a new allocation request, or amend the existing one with a new date.

Energy: Extreme service
May our force be with you

We are here to add our human Energy to your research. We improve researcher productivity with research computing solutions. Our eResearch Analysts support research at your organisation, providing expertise in data storage and management, HPC and cloud computing, IT planning and grants. Our software engineers develop tools, mobile apps and web applications to help researchers share and collaborate. Our trainers teach research-tailored technology courses.

Get a great overview of our services offerings by visiting our website. And if you have any questions please get in touch.

Data: Smart science
Live open and prosper

Accelerate your research using domain-based data platforms, services and expertise. 

Data. It surrounds us. It educates us. It changes us. Above all, it grows fast, really fast. Changes in the technology ecosystem are revolutionising the data landscape. Research data proliferation is challenging researchers across all fields of expertise. 

Find more information on our website.

Intersect Processes

Intersect Generic Processes
Incident Management Process
See Intersect Incident Management Process