# Introduction
Data networks consist of a variety of different network elements, link types, end hosts, services, and requirements of such services. Furthermore, data networks consist not only of a single plane, but have different (logical) networking planes with different tasks within any data network, i.e., the control plane, the data plane, and the network management plane. Keeping track of the different elements, links, hosts, services, their interactions, and their runtime behavior on these three networking planes is a non-trivial task that is usually subsumed under the very broad term of network operations.
There are different approaches to network operations that differ not only in their logical distinction but also in how an implementer, typically a network equipment vendor, implements the network elements and the particular operations.
We outline two basic approaches to network operation:
1. fully integrated network operations of all networking planes, i.e., usually called the traditional approach.
2. separation of control and data planes, i.e., usually called the Software Defined Networking (SDN) approach, though there have been earlier implementations of this concept under other names, e.g., Forwarding and Control Element Separation (ForCES) and others.
## Motivation
## Overarching Project Goals
* Keep It Simple, Stupid (KISS)
* Reuse existing technology bits wherever possible if those are stable, i.e., documented, maintained, etc., on a long time scale
* Integrate state-of-the-art technologies and methodologies
* Document, Document, Document
* Automate almost everything right from the beginning
* be an excellent citizen: test, test, test
* no hacks!
Some unsorted thoughts yet to be ordered:
* modules should be loaded (or unloaded) during runtime of the controller core
## Use Cases to be considered
The development of a general purpose SDN controller is not the primary goal at this early stage of the project.
Instead, there are two use cases to be considered in the implementation work that is currently ongoing:
* Primary: optical domain SDN-controller for the CoCSN project
* Secondary: SDN-controller for our local labs to manage an Ethernet-based lab environment
### Primary: optical domain SDN-controller for the CoCSN project
For this use case we initially do not consider the direct control of optical network elements, e.g., Optical Add-Drop Multiplexers (OADM), but we focus on optical network domains managed by another (SDN) controller. The goSDN controller communicates with this domain controller and can request information about the optical network elements, the links between them, and the optical and logical configuration of the network domain.
In a second step, the goSDN controller has to communicate with multiple domain controllers and has to find potential interchange points between these multiple domains. This is the preparation for a later step in this use case, when the goSDN controller has to find a network path between two end-points across multiple optical domains, including backup paths.
The intention here is to use an existing SDN southbound interface, very likely based on RESTCONF.
### Secondary: SDN-controller for our local labs to manage an Ethernet-based lab environment
For this use case we consider one of our local labs, e.g., either the telecommunications or networking lab, and how this lab with all its networking parts can be managed by the goSDN controller. In this case, the controller has to learn about all (network) elements, the links, and the topology by obtaining all the required information and performing its own topology computation. This will require an interface between goSDN and the network components that potentially goes beyond the typical SDN southbound interfaces.
## Structure of this Memo
This memo starts with this introduction that sets the stage for the theoretical underpinnings of the SDN-controller
and the actual implementation (and the various choices for it). Chapter 2 discusses the related work and chapter 3
outlines the theoretical foundations related to the control of networks and their relation to SDN. Chapter 4 uses
the output of Chapter 3 to define the conceptual design of the goSDN controller and discusses the pros
and cons of this conceptual design. Chapter 5 describes the actual design of the current goSDN implementation and is
meant to be a compendium for the source code.
Some conceptual building blocks for a network supervisor:
* **Northbound Interface (NBI)**
* **East-West-bound Interface**
## Applying Changes to What Plane?
Some basic thoughts to dissect how different approaches are applying changes to the various planes.
### Changes to the Control Plane
### Changes to the Data Plane
This is the use case for the SDN approach: A so-called SDN-controller applies policy rules to the data plane. These policy rules define the handling of flows in the network on a larger scale or, to be more precise, the handling of more or less specified packets.
A change to the data plane will not directly trigger a change to other planes. However, the flow of packets on the data plane can be observed by the control plane, and the control plane can take action depending on the observed data packets.
### Changes to the Management Plane
## Why we do this in go
Because it rocks, but let's see afterwards what can be written here.
## Storing Information
Section XXX (Conceptual Design of a SDN Controller as Network Supervisor)
discusses the need to store information about element inventories and
topology inventories.
### Element Inventories
Storing information about network elements and their properties is a relatively
static process, at least when one considers potential changes over time.
Typically such network elements are added to a network and they will remain in
the network for a longer time, i.e., multiple minutes or even longer.
### Topology Inventory
Every network has one given physical topology (G<sub>physical</sub> ) and on
top of this at least one logical topology (G<sub>logical1</sub>). There may be
multiple logical topologies (G<sub>n+1</sub>) on top of logical topologies
(G<sub>n</sub>), i.e., a recursion. Such logical topologies (G<sub>n+1</sub>)
can again have other logical topologies as recursion or other logical topologies
in parallel.
A topology consists out of interfaces, which are attached to their respective
network elements, and links between these interfaces.
Mathematically, such a topology can be described as a directed graph, where
the interfaces of the network elements are the nodes and the links are
the edges.
G<sub>physical</sub> is a superset of G<sub>logical1</sub>.
The topology inventory has to store the particular graph for any topology and
also the connections between the different levels of topologies. For instance,
the G<sub>logical1</sub> is linked to G<sub>physical</sub>. (It needs to be clear
whether changes in the n-1 graph have an impact on the n graph.)
For further study at this point: Which type of database and which database
implementation should be used to store the different topology graphs and their
potential dependencies? What should the interface between gosdn and this
database look like?
Here is an attempt to describe the above text in a graphical representation (kind of... not perfect yet):
```mermaid
graph TB
SubGraph1 --> SubGraph1Flow
subgraph "G_logical1"
SubGraph1Flow(Logical Net)
Node1_l1[Node1_l1] <--> Node2_l1[Node2_l1] <--> Node3_l1[Node3_l1] <--> Node4_l1[Node4_l1] <--> Node5_l1[Node5_l1] <--> Node1_l1[Node1_l1]
end
subgraph "G_physical"
Node1[Node 1] <--> Node2[Node 2] <--> Node3[Node 3]
Node4[Node 4] <--> Node2[Node 2] <--> Node5[Node 5]
Net_physical[Net_physical] --> SubGraph1[Reference to G_logical1]
end
```
### Potential other Inventories
There may be the potential need to store information beyond pure topologies,
for instance about network flows, i.e., information about a group of packets
belonging together.
### neo4j
Due to the fact that network topologies, with all their elements and connections,
can be represented well by a graph, the choice of a graph database for persistence was obvious.
After some initial experiments with RedisGraph, neo4j was chosen,
because neo4j allows the use of multiple labels (for nodes as well as edges)
and offers a wider range of plugins.
The current implementation offers the possibility to persist different network elements
and their physical topology. It became clear that within the graph database one has to
move away from the basic idea of different independent graphs (topologies) and rather see
the whole construct as a single huge graph with a multitude of relations.
The following figure shows our first idea of a persistence of network topologies with neo4j.
```mermaid
graph TD
subgraph "representation in Database"
PND[PND 1]
A --> |belongs to| PND
B --> |belongs to| PND
C --> |belongs to| PND
D --> |belongs to| PND
E --> |belongs to| PND
A[Node 1] --> |physical| B[Node 2]
D[Node 4] --> |physical| B
B --> |physical| C[Node 3]
B --> |physical| E[Node 5]
A --> |logical1| B
B --> |logical1| C
C --> |logical1| D
D --> |logical1| E
E --> |logical1| A
end
```
The basic idea is to assign the different network elements to a specific Principal Network Domain (PND).
The different topologies are represented by a neo4j relationship between the network elements that are
stored as neo4j nodes. However, with this current variant it is not possible, as required in
[Topology Inventory](#topology-inventory), to represent topologies that are hierarchically
interdependent, since neo4j does not allow relations to be stored as properties (as described [here](https://neo4j.com/docs/cypher-manual/current/syntax/values/#structural-types)).
For the reason mentioned above, a more complex idea for persistence is available for the further development, which hopefully allows us to persist and map network elements, PNDs and topologies with all their hierarchical dependencies.
The following figure tries to visualize this idea. The main difference is that for the different topologies separate nodes are created, to which so-called links belong. The links themselves form a connection between the respective network elements. A link can have several layer protocols, like OTUCN, ODUCN etc.
```mermaid
graph TD
subgraph "dependencies of topologies"
logical1 -->|related_to| physical
logical5 -->|related_to| physical
logical3 -->|related_to| logical1
end
subgraph "every node belongs to a specific PND"
Node1 -->|belongs_to| PND
Node2 -->|belongs_to| PND
Node3 -->|belongs_to| PND
Node4 -->|belongs_to| PND
Node5 -->|belongs_to| PND
end
subgraph "relationship between nodes (nodes can be linked by 0...n links)"
lp2[link_physical]
lp3[link_physical]
lp4[link_physical]
lp5[link_logical1]
lp2 --> |connects| Node4
lp2 --> |connects| Node2
lp3 --> |connects| Node2
lp3 --> |connects| Node3
lp4 --> |connects| Node2
lp4 --> |connects| Node5
lp5 --> |connects| Node1
lp5 --> |connects| Node2
end
subgraph "links are part of a topology"
lp1[link_physical]
lp1 --> |connects| Node1
lp1 --> |connects| Node2
lp1 --> |part_of| physical
end
subgraph "links can contain 1...n layers"
lp2 --> |contains| ODUH
lp2 --> |contains| OTUCN
lp2 --> |contains| ODUCN
end
```
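The relationships in the figure can also be restated as plain data types. The following Go sketch (illustrative names, not the actual goSDN schema) mirrors the figure: separate link nodes connect pairs of network elements, belong to one topology, and carry 1...n layers:

```go
package main

import "fmt"

// PND is a principal network domain; every node belongs to one.
type PND struct{ Name string }

// NetworkElement models a node in the figure (belongs_to relation).
type NetworkElement struct {
	Name string
	PND  *PND
}

// Topology models the topology nodes; RelatedTo expresses the
// dependency between topologies (e.g. logical1 -> physical).
type Topology struct {
	Name      string
	RelatedTo *Topology // nil for the physical topology
}

// Layer is a layer protocol carried by a link, e.g. OTUCN or ODUCN.
type Layer string

// Link is its own node: it connects exactly two network elements,
// is part of one topology, and contains 1...n layers.
type Link struct {
	Name     string
	Connects [2]*NetworkElement
	PartOf   *Topology
	Contains []Layer
}

func main() {
	pnd := &PND{Name: "PND 1"}
	n1 := &NetworkElement{Name: "Node1", PND: pnd}
	n2 := &NetworkElement{Name: "Node2", PND: pnd}

	physical := &Topology{Name: "physical"}
	logical1 := &Topology{Name: "logical1", RelatedTo: physical}

	lp1 := &Link{
		Name:     "link_physical",
		Connects: [2]*NetworkElement{n1, n2},
		PartOf:   physical,
		Contains: []Layer{"OTUCN", "ODUCN"},
	}
	fmt.Println(lp1.PartOf.Name, logical1.RelatedTo.Name)
}
```

Because links and topologies are nodes of their own, the hierarchy between topologies is an ordinary relationship here, which sidesteps the relations-as-properties limitation mentioned above.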
The above idea is not yet approved and there are still open questions.
- Is there a better way to model several different physical connections between the same pair of nodes than separate link nodes between them?
- Can topologies span different PNDs -> membership in different PNDs?
- Where can we benefit from using different layers? (e.g. possible saving of unnecessary relations between nodes)
- ...
## YANG to code
The base of the development of goSDN are YANG modules.
### YANG
YANG defines an abstract network interface. It is the foundation of the RESTCONF protocol. Several code generators exist to generate code stubs from a given definition.
### OpenAPI
For now we can only use the OpenAPI 2.0 standard.
## Storing Information
For now, this section collects some loose thoughts about what information has to be stored, how, and where.
There seem to be two classes of information to be stored in the controller:
* short-lived information, such as currently configured network flows or network configuration obtained in use case #1 (CoCSN)
* long-term information, such as information about principal network domains, elements in such a domain if directly learned from the SBI, etc.

Long-term information should be persistently stored in the database and survive reboots of goSDN etc. Short-lived information doesn't have to survive reboots of goSDN.
### Some more details for implementation for the database(s)
We define the principal network domain (PND); each piece of information of any PND has to be stored in relation to the particular PND.
Specification of a PND:
* Human readable name of PND
* Set of supported Southbound-Interfaces, e.g., RESTCONF, TAPI, OpenFlow etc
* Physical inventory: network elements, hosts and links, potentially only the SBI SDN controller
A PND entry must be explicitly generated, though some information can be generated automatically, e.g., for use-case #1 (CoCSN) the physical inventory would mean that the information about the SBI domain-specific SDN controller is entered.
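Put together, a PND record along the lines of the specification above might look like this (a sketch with illustrative field names, not the actual goSDN types):

```go
package main

import "fmt"

// PND sketches a principal network domain record as specified above.
type PND struct {
	Name          string   // human readable name of the PND
	SupportedSBIs []string // e.g. RESTCONF, TAPI, OpenFlow
	// Physical inventory: network elements, hosts and links; for
	// use-case #1 this may only be the SBI domain SDN controller.
	PhysicalInventory []string
}

func main() {
	// Explicitly generated entry; the inventory entry below is the
	// kind of information that could be filled in automatically.
	pnd := PND{
		Name:              "CoCSN optical domain",
		SupportedSBIs:     []string{"RESTCONF"},
		PhysicalInventory: []string{"domain SDN controller"},
	}
	fmt.Printf("%s via %v\n", pnd.Name, pnd.SupportedSBIs)
}
```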