|
|
|
# Implementation Aspects of the goSDN Controller
|
|
|
|
|
|
|
|
|
|
|
|
## Why we do this in Go
|
|
|
|
Because it rocks, but let's see afterwards what can be written here.
|
|
|
|
|
|
|
|
## Storing Information
|
|
|
|
|
|
|
|
Section XXX (Conceptual Design of an SDN Controller as Network Supervisor)
|
|
|
|
discusses the need to store information for element inventories and
|
|
|
|
topology inventories.
|
|
|
|
|
|
|
|
### Element Inventories
|
|
|
|
|
|
|
|
Storing information about network elements and their properties is a relatively
|
|
|
|
static process, at least when one considers potential changes over time.
|
|
|
|
Typically, such network elements are added to a network and will remain in
|
|
|
|
the network for a longer time, i.e., multiple minutes or even longer.
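To make this concrete, an element inventory entry could be sketched as a simple Go structure. This is only a sketch; the type and field names (`NetworkElement`, `Address`, `AddedAt`, etc.) are illustrative assumptions, not the actual goSDN types:

```go
package main

import (
	"fmt"
	"time"
)

// NetworkElement is a sketch of an element inventory entry.
// All field names are illustrative assumptions.
type NetworkElement struct {
	ID      string    // stable identifier within the inventory
	Name    string    // human readable name
	Address string    // management address of the element
	AddedAt time.Time // elements stay in the network for minutes or longer
}

// Inventory maps element IDs to their records. Since elements change
// rarely, a plain map is sufficient for this sketch.
type Inventory map[string]NetworkElement

// Add inserts or replaces an element in the inventory.
func (inv Inventory) Add(e NetworkElement) {
	inv[e.ID] = e
}

func main() {
	inv := Inventory{}
	inv.Add(NetworkElement{ID: "ne-1", Name: "edge-switch", Address: "10.0.0.1", AddedAt: time.Now()})
	fmt.Println(len(inv)) // prints 1
}
```

Because changes are rare, a simple keyed store like this maps naturally onto a persistent database table or graph node set later on.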
|
|
|
|
|
|
|
|
### Topology Inventory
|
|
|
|
|
|
|
|
Every network has one given physical topology (G<sub>physical</sub>) and on
|
|
|
|
top of this at least one logical topology (G<sub>logical1</sub>). There may be
|
|
|
|
multiple logical topologies (G<sub>n+1</sub>) on top of logical topologies
|
|
|
|
(G<sub>n</sub>), i.e., a recursion. Such logical topologies (G<sub>n+1</sub>)
|
|
|
|
can again have other logical topologies stacked on top (recursion) or other logical topologies
|
|
|
|
in parallel.
|
|
|
|
|
|
|
|
A topology consists of interfaces, which are attached to their respective
|
|
|
|
network elements, and links between these interfaces.
|
|
|
|
|
|
|
|
Mathematically, such a topology can be described as a directed graph, where
|
|
|
|
the interfaces of the network elements are the nodes and the links are
|
|
|
|
the edges.
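The graph structure and the recursion of logical topologies can be sketched in Go as follows. The type and field names (`Interface`, `Link`, `Topology`, `Parent`) are illustrative assumptions, not the actual goSDN data model:

```go
package main

import "fmt"

// Interface is a node of the topology graph: an interface attached to a
// network element. Names are illustrative, not the actual goSDN types.
type Interface struct {
	Element string // network element this interface belongs to
	Name    string // e.g. "eth0"
}

// Link is a directed edge between two interfaces.
type Link struct {
	From, To Interface
}

// Topology is a directed graph; Parent points to the topology one level
// below (e.g. G_logical1.Parent == G_physical), modeling the recursion
// of logical topologies described above.
type Topology struct {
	Name   string
	Parent *Topology // nil for G_physical
	Nodes  []Interface
	Edges  []Link
}

// Depth returns how many topology levels lie below this one.
func (t *Topology) Depth() int {
	if t.Parent == nil {
		return 0
	}
	return 1 + t.Parent.Depth()
}

func main() {
	phys := &Topology{Name: "G_physical"}
	log1 := &Topology{Name: "G_logical1", Parent: phys}
	log2 := &Topology{Name: "G_logical2", Parent: log1}
	fmt.Println(log2.Depth()) // prints 2
}
```

The `Parent` pointer is one possible way to record the connection between topology levels; whether a change in the n-1 graph propagates to the n graph would still have to be decided on top of such a structure.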
|
|
|
|
|
|
|
|
G<sub>physical</sub> is a superset of G<sub>logical1</sub>.
|
|
|
|
|
|
|
|
The topology inventory has to store the particular graph for any topology and
|
|
|
|
also the connections between the different levels of topologies. For instance,
|
|
|
|
G<sub>logical1</sub> is linked to G<sub>physical</sub>. (It needs to be
clarified whether changes in the n-1 graph have an impact on the n graph.)
|
|
|
|
|
|
|
|
For further study at this point: Which type of database and implementation of
|
|
|
|
databases should be used to store the different topology graphs and their
|
|
|
|
potential dependencies? What should the interface between goSDN and this
|
|
|
|
database look like?
|
|
|
|
|
|
|
|
Here is an attempt to describe the above text in a graphical representation (kind of... not perfect yet):
|
|
|
|
|
|
|
|
```mermaid
|
|
|
|
graph TB
|
|
|
|
|
|
|
|
SubGraph1 --> SubGraph1Flow
|
|
|
|
subgraph "G_logical1"
|
|
|
|
SubGraph1Flow(Logical Net)
|
|
|
|
Node1_l1[Node1_l1] <--> Node2_l1[Node2_l1] <--> Node3_l1[Node3_l1] <--> Node4_l1[Node4_l1] <--> Node5_l1[Node5_l1] <--> Node1_l1[Node1_l1]
|
|
|
|
end
|
|
|
|
|
|
|
|
subgraph "G_physical"
|
|
|
|
Node1[Node 1] <--> Node2[Node 2] <--> Node3[Node 3]
|
|
|
|
Node4[Node 4] <--> Node2[Node 2] <--> Node5[Node 5]
|
|
|
|
|
|
|
|
Net_physical[Net_physical] --> SubGraph1[Reference to G_logical1]
|
|
|
|
|
|
|
|
end
|
|
|
|
```
|
|
|
|
|
|
|
|
### Potential Other Inventories
|
|
|
|
|
|
|
|
There may be the potential need to store information beyond pure topologies,
|
|
|
|
namely about network flows, i.e., information about a group of packets
|
|
|
|
belonging together.
|
|
|
|
|
|
|
|
## Database
|
|
|
|
A database will be used for the management and persistence of network
|
|
|
|
topologies and their associated elements within goSDN.
|
|
|
|
|
|
|
|
Since network topologies are often depicted as graphs, it was obvious to stick
|
|
|
|
to this concept and, also due to their increasing popularity, to use a graph
|
|
|
|
database. After a closer examination of graph databases, it was found
|
|
|
|
that they (with their labels, nodes, relations and properties) are well suited
|
|
|
|
for a representation of network topologies.
|
|
|
|
|
|
|
|
The first basic idea was to create different single graphs representing the
|
|
|
|
different network topologies and label each node and edge to ensure a clear
|
|
|
|
assignment to a topology.
|
|
|
|
This would mean that nodes and edges of a graph have 1...n labels.
|
|
|
|
Therefore, if you want to display a simple network topology in a graph, you can
|
|
|
|
display the different network elements as individual nodes and the edges between
|
|
|
|
network elements as their respective connections, such as Ethernet.
|
|
|
|
This works with both physical and logical topologies, which are described in
|
|
|
|
more detail [here](#topology-inventory).
|
|
|
|
So a simple topology in a graph database could look as shown below.
|
|
|
|
|
|
|
|
```mermaid
|
|
|
|
graph TD
|
|
|
|
A[Node 1 - Label: 'Host,physical'] -->|Ethernet - Label: 'physical'| B[Node 2 - Label: 'Hub,physical']
|
|
|
|
C[Node 3 - Label: 'Host,physical'] -->|Ethernet - Label: 'physical'| B
|
|
|
|
B -->|Ethernet - Label: 'physical'| D[Node 4 - Label: 'Host,physical']
|
|
|
|
B -->|Ethernet - Label: 'physical'| E[Node 5 - Label: 'Host,physical']
|
|
|
|
```
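Such a labeled topology would ultimately be created with Cypher statements. The following Go sketch only renders the query text; the node variables, the `ETHERNET` relationship type, and the `name`/`topology` properties are illustrative assumptions, not the actual goSDN schema:

```go
package main

import (
	"fmt"
	"strings"
)

// cypherNode renders a CREATE clause for a node with 1...n labels, as
// supported by neo4j (RedisGraph allowed only a single label per node).
// This sketch builds the query text only; it performs no driver call.
func cypherNode(variable, name string, labels ...string) string {
	return fmt.Sprintf("CREATE (%s:%s {name: '%s'})",
		variable, strings.Join(labels, ":"), name)
}

// cypherLink renders a relationship between two previously created node
// variables, tagged with the topology it belongs to.
func cypherLink(from, to, topology string) string {
	return fmt.Sprintf("CREATE (%s)-[:ETHERNET {topology: '%s'}]->(%s)",
		from, topology, to)
}

func main() {
	fmt.Println(cypherNode("n1", "Node 1", "Host", "physical"))
	fmt.Println(cypherNode("n2", "Node 2", "Hub", "physical"))
	fmt.Println(cypherLink("n1", "n2", "physical"))
}
```

This illustrates why multiple labels per node matter: a node participating in both the physical topology and a logical one would simply carry both labels (e.g. `:Host:physical:logical1`).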
|
|
|
|
|
|
|
|
For this purpose, some experiments with the [Redis](https://redis.io/) database
|
|
|
|
module [`RedisGraph`](https://oss.redislabs.com/redisgraph/) were carried out
|
|
|
|
first. The basic implementation was possible, but the function of assigning
|
|
|
|
several labels to one node/edge was missing (originally we considered this to be
indispensable, especially for mapping different topologies).
|
|
|
|
For this reason we looked for an alternative and found
[neo4j](https://neo4j.com/), a graph database that allows us to attach
multiple labels to nodes and edges and that offers a wide range of additional
plugins such as [apoc](https://neo4j.com/labs/apoc/).
|
|
|
|
|
|
|
|
### neo4j
|
|
|
|
TODO: add a little description for neo4j in general
|
|
|
|
|
|
|
|
#### Implementation With neo4j
|
|
|
|
The current implementation offers the possibility to persist different network
|
|
|
|
elements (e.g. devices, interfaces...) and their physical topology and mainly
|
|
|
|
serves to represent the prototypical dataflow of goSDN to the database.
|
|
|
|
The following figure shows our first idea of a persistence of network
|
|
|
|
topologies with neo4j (to save space, only the labels were included).
|
|
|
|
```mermaid
|
|
|
|
graph TD
|
|
|
|
PND[PND 1]
|
|
|
|
A --> |belongs to| PND
|
|
|
|
B --> |belongs to| PND
|
|
|
|
C --> |belongs to| PND
|
|
|
|
D --> |belongs to| PND
|
|
|
|
E --> |belongs to| PND
|
|
|
|
|
|
|
|
A[Label: 'Host,physical,logical1'] --> |Label: 'physical'| B[Label: 'Hub,physical,logical1']
|
|
|
|
D[Label: 'Host,physical,logical1'] --> |Label: 'physical'| B
|
|
|
|
B --> |Label: 'physical'| C[Label: 'Host,physical,logical1']
|
|
|
|
B --> |Label: 'physical'| E[Label: 'Host,physical,logical1']
|
|
|
|
|
|
|
|
A --> |Label: 'logical1'| B
|
|
|
|
B --> |Label: 'logical1'| C
|
|
|
|
C --> |Label: 'logical1'| D
|
|
|
|
D --> |Label: 'logical1'| E
|
|
|
|
E --> |Label: 'logical1'| A
|
|
|
|
```
|
|
|
|
|
|
|
|
The basic idea is to assign the different network elements to a specific
|
|
|
|
Principal Network Domain (PND). The different topologies are represented by a
|
|
|
|
neo4j relationship between the network elements that are stored as neo4j nodes.
|
|
|
|
However, with the current variant it is not possible, as required in
|
|
|
|
[Topology Inventory](#topology-inventory), to represent topologies that are hierarchically
|
|
|
|
interdependent, since neo4j does not allow relations to be stored as properties
|
|
|
|
(as described [here](https://neo4j.com/docs/cypher-manual/current/syntax/values/#structural-types)).
|
|
|
|
Furthermore, multiple links between the same nodes which belong to the same
|
|
|
|
topology are difficult to represent, since this model only provides a single
|
|
|
|
link between nodes of a certain topology.
|
|
|
|
|
|
|
|
For the reason mentioned above, a more complex idea for persistence is available
|
|
|
|
for further development, which will hopefully allow us to persist and map
|
|
|
|
network elements, PNDs and topologies with all their hierarchical dependencies.
|
|
|
|
|
|
|
|
The following figure tries to visualize this idea.
|
|
|
|
```mermaid
|
|
|
|
graph TD
|
|
|
|
subgraph "dependencies of topologies"
|
|
|
|
logical1 -->|related_to| physical
|
|
|
|
logical5 -->|related_to| physical
|
|
|
|
logical3 -->|related_to| logical1
|
|
|
|
end
|
|
|
|
|
|
|
|
subgraph "every node belongs to a specific PND"
|
|
|
|
Node1 -->|belongs_to| PND
|
|
|
|
Node2 -->|belongs_to| PND
|
|
|
|
Node3 -->|belongs_to| PND
|
|
|
|
Node4 -->|belongs_to| PND
|
|
|
|
Node5 -->|belongs_to| PND
|
|
|
|
end
|
|
|
|
|
|
|
|
subgraph "relationship between nodes (nodes can be linked by 0...n links)"
|
|
|
|
lp2[link_physical]
|
|
|
|
lp3[link_physical]
|
|
|
|
lp4[link_physical]
|
|
|
|
lp5[link_logical1]
|
|
|
|
lp2 --> |connects| Node4
|
|
|
|
lp2 --> |connects| Node2
|
|
|
|
lp3 --> |connects| Node2
|
|
|
|
lp3 --> |connects| Node3
|
|
|
|
lp4 --> |connects| Node2
|
|
|
|
lp4 --> |connects| Node5
|
|
|
|
lp5 --> |connects| Node1
|
|
|
|
lp5 --> |connects| Node2
|
|
|
|
end
|
|
|
|
|
|
|
|
subgraph "links are part of a topology"
|
|
|
|
lp1[link_physical]
|
|
|
|
lp1 --> |connects| Node1
|
|
|
|
lp1 --> |connects| Node2
|
|
|
|
lp1 --> |part_of| physical
|
|
|
|
end
|
|
|
|
|
|
|
|
subgraph "links can contain 1...n layers"
|
|
|
|
lp2 --> |contains| ODUH
|
|
|
|
lp2 --> |contains| OTUCN
|
|
|
|
lp2 --> |contains| ODUCN
|
|
|
|
end
|
|
|
|
```
|
|
|
|
The basic structure explained in the upper part remains the same.
|
|
|
|
However, the relations, which previously served as links between the respective
|
|
|
|
nodes, now become **separate nodes**. These nodes now act as links between the
|
|
|
|
respective network elements and are part of a network topology (which itself
|
|
|
|
is represented as a separate node in the graph). By this change, network
|
|
|
|
topologies can now be interdependent. Furthermore, as can be seen in the figure
|
|
|
|
above, you can add additional nodes to the link nodes by using this scheme.
|
|
|
|
So a physical link between two nodes could e.g. **contain** several cables.
|
|
|
|
All other information can be stored in the properties of the respective nodes/edges.
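The link-as-node scheme can be sketched in Go as follows. The names (`LinkNode`, `Connects`, `PartOf`, `Layers`) and the sample data are illustrative assumptions, not the actual goSDN schema:

```go
package main

import "fmt"

// LinkNode models a link as a separate record ("node" in the graph
// database) rather than as a relation, so several parallel links between
// the same pair of network elements and per-link layers can be stored.
// All names are illustrative assumptions.
type LinkNode struct {
	ID       string
	Connects [2]string // IDs of the two connected network element nodes
	PartOf   string    // topology this link belongs to, e.g. "physical"
	Layers   []string  // 1...n layers, e.g. ODUH, OTUCN, ODUCN
}

// parallelLinks counts how many links of one topology connect the same
// pair of nodes -- exactly what the relation-based model could not express.
func parallelLinks(links []LinkNode, a, b, topology string) int {
	n := 0
	for _, l := range links {
		same := (l.Connects[0] == a && l.Connects[1] == b) ||
			(l.Connects[0] == b && l.Connects[1] == a)
		if same && l.PartOf == topology {
			n++
		}
	}
	return n
}

func main() {
	links := []LinkNode{
		{ID: "lp1", Connects: [2]string{"Node1", "Node2"}, PartOf: "physical"},
		{ID: "lp2", Connects: [2]string{"Node1", "Node2"}, PartOf: "physical",
			Layers: []string{"ODUH", "OTUCN", "ODUCN"}},
		{ID: "lp5", Connects: [2]string{"Node1", "Node2"}, PartOf: "logical1"},
	}
	fmt.Println(parallelLinks(links, "Node1", "Node2", "physical")) // prints 2
}
```

In the database the `PartOf` field would become a `part_of` relationship to a topology node, so topologies themselves can be related to each other (`related_to`), which is what enables the hierarchical dependencies.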
|
|
|
|
|
|
|
|
The above idea is not yet approved and there are still open questions:
|
|
|
|
- Is there a better way to represent several different physical connections between the same nodes than separate link nodes between them?
|
|
|
|
- Can topologies span different PNDs, i.e., membership in different PNDs?
|
|
|
|
- Where can we benefit from using different layers? (e.g. possible saving of unnecessary relations between nodes)
|
|
|
|
- Do the SDN controllers provide us with the necessary information to map the topologies in this way?
|
|
|
|
- ...
|
|
|
|
|
|
|
|
## YANG to Code
|
|
|
|
|
|
|
|
YANG modules are the base of the development of goSDN. The RESTful API used for RESTCONF is defined in an OpenAPI 2.0 file. This API documentation is generated from the YANG module. The YANG module description is also used to generate code stubs for the goSDN RESTCONF client.
|
|
|
|
|
|
|
|
\includegraphics{gfx/yang-schematics.pdf}
|
|
|
|
|
|
|
|
### YANG
|
|
|
|
|
|
|
|
YANG is a data modeling language that defines an abstract network interface. It is the foundation of the RESTCONF protocol. Several code generators exist that create code stubs from a given YANG definition.
|
|
|
|
|
|
|
|
### OpenAPI
|
|
|
|
|
|
|
|
OpenAPI - formerly known as Swagger - is a specification format for describing RESTful APIs. We use OpenAPI documents to define the RESTCONF server implementation of the CoCSN YANG modules.
|
|
|
|
|
|
|
|
### Toolchain
|
|
|
|
|
|
|
|
We use three different tools for the code generation workflow. For the RESTCONF server, `yanger` is used to generate the OpenAPI documentation from the YANG file. `go-swagger` is used to generate a RESTCONF server with stubs for the REST calls.
|
|
|
|
|
|
|
|
The RESTCONF client stubs used by goSDN are generated from YANG files using [ygot](https://github.com/openconfig/ygot).
|
|
|
|
|
|
|
|
### Dependencies
|
|
|
|
|
|
|
|
For now we can only use the OpenAPI 2.0 standard. This is because `go-swagger` does not support OpenAPI 3.0 specifications yet.
|
|
|
|
|
|
|
|
## Storing Information
|
|
|
|
|
|
|
|
For now, this section holds some loose thoughts about what information has to be stored, how, and where.
|
|
|
|
|
|
|
|
There seem to be two classes of information to be stored in the controller:
|
|
|
|
* short-lived information, such as currently configured network flows or network configuration obtained from use case #1 (CoCSN)
|
|
|
|
* long-lived information, such as information about principal network domains, elements in such a domain if directly learned from the SBI, etc.
|
|
|
|
|
|
|
|
Long-lived information should be persistently stored in the database and survive reboots of goSDN. Short-lived information does not have to survive reboots of goSDN.
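The distinction between the two classes can be sketched in Go; the type and field names (`Lifetime`, `Entry`, `survivesReboot`) are illustrative assumptions, not goSDN's actual types:

```go
package main

import "fmt"

// Lifetime classifies stored controller information, following the two
// classes described above. Names are illustrative assumptions.
type Lifetime int

const (
	ShortLived Lifetime = iota // e.g. currently configured network flows
	LongLived                  // e.g. PNDs; must survive goSDN reboots
)

// Entry is a minimal sketch of a piece of controller state.
type Entry struct {
	Key      string
	Value    string
	Lifetime Lifetime
}

// survivesReboot reports whether an entry must be written to the
// persistent database rather than kept in memory only.
func survivesReboot(e Entry) bool {
	return e.Lifetime == LongLived
}

func main() {
	flow := Entry{Key: "flow-42", Lifetime: ShortLived}
	pnd := Entry{Key: "pnd-lab", Lifetime: LongLived}
	fmt.Println(survivesReboot(flow), survivesReboot(pnd)) // prints: false true
}
```

Such a flag (or two separate stores) would let goSDN route long-lived entries to the database while keeping short-lived state in memory.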
|
|
|
|
|
|
|
|
|
|
|
|
### Some More Implementation Details for the Database(s)
|
|
|
|
|
|
|
|
We define the principal network domain (PND); each piece of information of any PND has to be stored in relation to that particular PND.
|
|
|
|
|
|
|
|
Specification of a PND:
|
|
|
|
* Human readable name of PND
|
|
|
|
* Textual description for further information
|
|
|
|
* Set of supported southbound interfaces (SBIs), e.g., RESTCONF, TAPI, OpenFlow, etc.
|
|
|
|
* Physical inventory: network elements, hosts and links; potentially only the SBI SDN controller
|
|
|
|
|
|
|
|
A PND entry must be explicitly generated, though some information can be generated automatically; e.g., for use case #1 (CoCSN) the physical inventory would mean that the information about the SBI domain-specific SDN controller is entered.
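The specification above can be sketched as a Go structure; the type and field names (`PND`, `SBIType`, `NewPND`, `Inventory`) are illustrative assumptions, not goSDN's actual API:

```go
package main

import "fmt"

// SBIType names a supported southbound interface.
type SBIType string

const (
	SBIRestconf SBIType = "RESTCONF"
	SBITapi     SBIType = "TAPI"
	SBIOpenFlow SBIType = "OpenFlow"
)

// PND sketches the PND specification above.
// All field names are illustrative assumptions.
type PND struct {
	Name        string    // human readable name of the PND
	Description string    // textual description for further information
	SBIs        []SBIType // set of supported southbound interfaces
	// Physical inventory: IDs of network elements, hosts and links, or
	// potentially only the ID of the SBI domain-specific SDN controller.
	Inventory []string
}

// NewPND explicitly generates a PND entry; the inventory can then be
// filled automatically, e.g. from the SBI SDN controller.
func NewPND(name, desc string, sbis ...SBIType) *PND {
	return &PND{Name: name, Description: desc, SBIs: sbis}
}

func main() {
	p := NewPND("lab", "CoCSN use case #1", SBITapi)
	p.Inventory = append(p.Inventory, "sbi-controller-1")
	fmt.Println(p.Name, len(p.SBIs), len(p.Inventory)) // prints: lab 1 1
}
```

The explicit constructor mirrors the requirement that a PND entry must be created deliberately, while the inventory is left mutable for automatic population.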