Streaming Telemetry & Grafana

Intro

Today I’ll give you a quick overview of Streaming Telemetry. After that overview we’ll look at how to install InfluxDB and Grafana to get started with streaming telemetry. It’s just a short introduction to a very deep topic. On InfluxDB or Grafana alone you can read tons of material.

Streaming Telemetry

To export data or logs from a system, we have different protocols. In the old days we used SNMP and Syslog. Syslog is for logging only. SNMP supports configuration and monitoring: we can poll the system or receive traps. Both protocols have problems like scaling and retransmission. With the C9800, Cisco uses Streaming Telemetry, which is based on the gRPC protocol.

gRPC is a modern open source high performance RPC framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking and authentication. It is also applicable in last mile of distributed computing to connect devices, mobile applications and browsers to backend services.

https://grpc.io/

The gRPC dial-out function pushes information via TCP/HTTP2. This is a very fast, efficient and secure way to deliver information. The controller can push real-time information at very short intervals, which isn’t possible with SNMP or Syslog. In addition, the data is model based (YANG).

YANG Model

The data pushed by the C9800 is based on YANG models. YANG is a data modeling language; it defines the data structure. With OpenConfig there is a vendor-neutral way to model data. In addition to the OpenConfig YANG models, Cisco provides a lot of vendor-specific data models.

Why use big data

Streaming Telemetry is a way to export tons of data. That data needs to be stored and analysed. Why should we do this?
There are a lot of reasons to analyse data, and also a lot of ways. If you look at the vendors, each vendor has its own tools: AirWave, DNA-C, NSight. All of these tools collect and analyse data. Each vendor has its own way of working with the data, and most vendors show some nice colors. But most vendors don’t tell you what you are actually looking at. In addition, they decide what might be interesting for you.
If you have some knowledge about wireless networks, you may want to look beyond the vendor metrics. Maybe vendors don’t show you the data you need. This is the point where you can start with big data. In my case, I use Grafana and InfluxDB as a solution.

Depending on the vendor you can export some or tons of data, at small or large time intervals. With Streaming Telemetry, for example, you can export data at intervals of a few seconds. That puts you in a position to query the signal strength of all clients at second intervals and write it to a database (InfluxDB). With a visualisation tool (Grafana) you can analyse and visualise that data.
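To make this concrete, a query against such a database could look like the following InfluxQL sketch. The measurement name client-stats and the names dot11-rssi and ms-mac-address are hypothetical placeholders; the real names depend on what you configure later in metrics.json:

-- hypothetical: average client RSSI over the last hour, per client, in 10-second buckets
SELECT mean("dot11-rssi") FROM "client-stats" WHERE time > now() - 1h GROUP BY time(10s), "ms-mac-address"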

Let me give you an example. You are running a warehouse and have some trouble with the Wi-Fi signal. Now you change the TX settings on some access points. How do you validate whether that solved the problem? You can do a validation survey or maybe talk to the workers. But the workers only give you assumptions. A validation survey shows real data, but is it what your devices see? You need to compare it. A validation survey is essential. But in a live system you can compare other metrics as well. One metric, for example, is the client data rate. How was that rate before and after the change?

What if your problem was a firmware bug on the device? You change the driver on 10% or 20% of the devices and ask for feedback. But what if you could monitor metrics like data rate, roaming time and RSSI at a one-second interval? Now you can compare the data from last week with the data from this week. Same devices, same workers, different firmware version. That data shows you real Wi-Fi-relevant information and not assumptions from a worker.

  • You are introducing a new generation of devices: how do they perform compared to the old devices?
  • You make changes to your system, configuration or firmware: how do they affect your infrastructure?
  • You want to compare backbone throughput with Wi-Fi throughput over time across different vendors?

There are a lot of examples where big data analysis can be very useful. This tutorial should give you an easy way to start with big data.

During the installation you’ll get a quick intro to all the tools used, like Grafana and InfluxDB.

Install your Logging System

I use an Ubuntu 18.04 machine with a static IP as my logging system. It’s a clean install with SSH enabled. Nothing more is needed.

Pipeline

What is Pipeline?

I’d describe Pipeline as a middleware between the C9800 and InfluxDB. It receives gRPC data from the C9800 and converts/writes it into InfluxDB.

Install Pipeline

Pipeline is available on GitHub and can easily be cloned:

git clone https://github.com/cisco/bigmuddy-network-telemetry-pipeline.git

Change into the cloned directory and create a new config file:

cd bigmuddy-network-telemetry-pipeline/
nano C9800.conf

I use the following config as a sample:

[default]
id = pipeline

[gRPCDialout]
stage = xport_input
type = grpc
encap = gpb
listen = :58000
tls = false
logdata = on

[inspector]
stage = xport_output
type = tap
file = dump_script.json
encoding = json_events
datachanneldepth = 1000
countonly = false

[metrics_influx]
stage = xport_output
type = metrics                                                        # file type for parsing
file = /home/timo/bigmuddy-network-telemetry-pipeline/metrics.json   # config path
datachanneldepth = 10000                                              # optionally, specify a buffer for the data
output = influx                                                       # InfluxDB output
influx = http://<Server IP>:8086                                      # InfluxDB URL
database = mdt_db                                                     # InfluxDB database
dump = metricsdump.txt                                                # local InfluxDB dump file (remove after testing)
workers = 15

A description of the different options can be found in the default configuration file, pipeline.conf.

Pipeline includes a default metrics file, which I move aside to create my own metrics.json. This file describes how Pipeline transforms and forwards the data into InfluxDB.

mv metrics.json metrics.json.backup
touch metrics.json

We’ll add the content for the metrics file later. For a first check we can now start Pipeline in debug mode:

./bin/pipeline -config=C9800.conf -log= -debug

At the end you should see that it starts but prints two error messages, since we haven’t set up our metrics.json yet.

With CTRL + C you can terminate Pipeline for now.

InfluxDB

What is InfluxDB?

InfluxDB is an open source time series database. It is optimized for large amounts of time series data, which is exactly what we need to monitor a huge amount of data over time.

Install InfluxDB

The following commands are needed to install InfluxDB:

curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/lsb-release
echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
sudo apt-get update
sudo apt-get install influxdb

To start InfluxDB and check the status use the following commands:

sudo service influxdb start
sudo service influxdb status
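If you want InfluxDB to start automatically after a reboot (assuming a systemd-based setup, as used for Grafana below):

sudo systemctl enable influxdb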

Now we need to connect to the local DB and create the database.

influx
precision rfc3339
CREATE DATABASE mdt_db
SHOW DATABASES
exit
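Optionally, since streaming telemetry fills a database quickly, you can attach a retention policy so old data is discarded automatically. A sketch that keeps 30 days (adjust the duration to your needs):

influx -execute 'CREATE RETENTION POLICY "keep30d" ON "mdt_db" DURATION 30d REPLICATION 1 DEFAULT'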

Grafana

What is Grafana?

Grafana accesses the data in InfluxDB and presents it. You can easily create anything from simple to complex dashboards. In addition, there is an easy way to add an alarm with a push notification per dashboard/panel.

The push notification can be sent, for example, via good old e-mail or in a modern way like a Slack message including a screenshot of the latest data.

Install Grafana

The following commands are needed to install Grafana:

curl https://packages.grafana.com/gpg.key | sudo apt-key add -
sudo add-apt-repository "deb https://packages.grafana.com/oss/deb stable main"
sudo apt-get update
sudo apt-get install grafana

To start Grafana and check the status use the following commands:

sudo systemctl start grafana-server
sudo systemctl status grafana-server

Enable autostart with this command:

sudo systemctl enable grafana-server

Access Grafana

Grafana can be accessed via http://<IP-address>:3000/

The default login and password is:

User: admin
Pass: admin

After the first login you need to change the password.

Configure your Logging System

We now have all components up and running. The next step is to configure the C9800 to send streaming telemetry and Pipeline to store the needed data in InfluxDB. Finally, we create a dashboard inside Grafana to view the data.

Get XPath

To push information from the C9800 to Pipeline/InfluxDB we need XPaths. We can get the XPath from the YANG model. Cisco provides the YANG Explorer, which makes it very easy to see the XPath for each parameter we require.

For a first example, we’ll push information about the traffic statistics of wireless clients.

How to find the XPath? I searched for "clients" and found the entry:

Cisco-IOS-XE-wireless-client-oper -> client-oper-data -> traffic-stats

That sounds interesting to me. There are a lot of options and data you can export. For me it’s mostly about clicking around or searching for what could be interesting.

I select the XPath; it includes all elements like "ms-mac-address", "bytes-rx" or "bytes-tx"…

Configuration C9800

For each XPath we need to add a subscription on our C9800. The configuration is mostly self-explanatory; note that the update-policy periodic value is given in centiseconds, so 500 means an update every 5 seconds.

telemetry ietf subscription 300
    encoding encode-kvgpb
    filter xpath /wireless-client-oper:client-oper-data/traffic-stats
    source-address 10.10.10.10
    stream yang-push
    update-policy periodic 500
    receiver ip address 10.10.10.18 58000 protocol grpc-tcp
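After applying the configuration you can verify on the controller whether the subscription is valid and the connection to the receiver is up, using the standard IOS-XE show commands:

show telemetry ietf subscription 300
show telemetry ietf subscription 300 receiver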

Configure Pipeline

After pushing information from the C9800 to Pipeline, we need to tell Pipeline which information to translate into InfluxDB.

nano /home/timo/bigmuddy-network-telemetry-pipeline/metrics.json

We got the basepath via YANG Explorer. The fields define which information we write into the database; entries with "tag": true are stored as InfluxDB tags, the rest as values. All other data we receive from the controller will be dropped.

[
    {
        "basepath" : "Cisco-IOS-XE-wireless-client-oper:client-oper-data/traffic-stats",
        "spec" : {
            "fields" : [
                {"name":"name", "tag": true},
                {"name":"pkts-rx"},
                {"name":"pkts-tx"}
            ]
        }
    }
]

Start Pipeline

If we now start Pipeline and load our metrics.json via the configuration, you should see the gRPC session:

./bin/pipeline -config=C9800.conf -log= -debug

During startup it asks for a username and password; that’s admin/admin for InfluxDB, not your Grafana password!

Open a second SSH session to your server to check whether Pipeline receives data and what data it receives. You can check that with the following commands:

cd bigmuddy-network-telemetry-pipeline/
tail -f dump_script.json

Configure Grafana

We now have the basics configured. The C9800 pushes data to Pipeline. Pipeline writes the data to InfluxDB. Now it’s time to configure Grafana to access and present the data from InfluxDB.

Add datasource to Grafana

During our first login we changed the password for the admin user. Please log in now with the new password to add a data source. The data source is our InfluxDB; please add it.

You can get the details of what and how to configure from the following screenshots. It’s straightforward.

Create our first Grafana dashboard

The Grafana dashboard is what we use in our daily business. It should include all the information we need at first view. The dashboard consists of panels of different sizes. As we only collect client RX/TX packets for now, we can create one panel to show both values, or one panel for RX and one for TX.

As it’s our first panel, we’ll create one that includes both values. If you use a demo C9800, it’s important to connect a client first and generate some traffic.

You can select all information via drop-down. If your drop-downs show no information, the database is empty. Generate some data and check whether Pipeline is working fine.
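You can also check directly in InfluxDB whether Pipeline has written anything. SHOW MEASUREMENTS lists the exact measurement name (Pipeline derives it from the basepath in metrics.json); the name in the SELECT is only a placeholder:

influx -database mdt_db -execute 'SHOW MEASUREMENTS'
influx -database mdt_db -execute 'SELECT * FROM "<measurement name>" LIMIT 5'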

We add a second query to show both RX and TX. While making your changes you can already see the data in the panel preview at the top. With ESC or the arrow at the top left you can go back.

Now we have our first panel. You can change, for example, the time range at the top right.

To change the panel name you can click the drop-down arrow or just hit "e". Under the general icon we can change the panel name and go back.

For a detailed view, we create a second panel. The second panel shows the client RX, but per client and not in total. You can add as many views as you need. Just make sure that the C9800 sends the data and Pipeline writes it to InfluxDB.

Summary

With streaming telemetry and Grafana we can create custom dashboards that show what we need in almost real time. It’s a great solution to monitor and analyse what’s important from a customer view and not a vendor view. If you look into the YANG data models you can find a lot of useful information.

You can log basic information like throughput or client count. But you can also log information like client RSSI and SNR. If you additionally collect that information from the client side, you are able to compare and validate the information.

I personally like to collect some KPIs from my network before and after changes or updates. This makes it possible to validate your changes. Grafana is a great tool to document and validate those KPIs.

Grafana and InfluxDB are tools that provide a massive amount of functionality. It’s not limited to the easy stuff I showed to get started. It’s also easy to integrate other vendors and devices, even if your device only supports Syslog or SNMP.

In addition to monitoring, you can create alarm rules and forward events to your systems. I use that with a mail and Slack integration. It’s configured pretty easily and quickly.

C9800-CL for my homelab

Introduction

After the release of the new Catalyst 9800 controller, I saw that the virtual version supports the same features as the hardware. In order to perform simple tests outside my lab in the future, I wanted to run the controller virtually on my Mac. Since there are one or two small stumbling blocks here, I created this blog post.

Prerequisites

I tested the installation under VMware Fusion and VirtualBox on my Mac. VMware Fusion runs stable with all options. Under VirtualBox I first had problems with the network card, but now it also runs.

For the installation I use the ISO and not the OVA.

Network Structure

My setup uses two adapters. One has access to my LAN for communication with the access point. The other one gets access to the "outside" via my Wi-Fi.

If I am outside my homelab, I use the first adapter in host-only mode. This gives me access to the interface even without an active LAN card. Accordingly, I can use the interface and my scripts without access points at any time.

Installation VMware Fusion

Create VM

For the installation under VMware Fusion we first create a new VM with the following parameters:

  • OS: Other Linux 4.x or later kernel 64-Bit
  • Legacy BIOS
  • HDD 8 GB (default value)

VM Customize

Before the first start we have to adjust the following settings for our VM:

  • Extend RAM to 4096 MB
  • Network adapters
    • Adapt the existing adapter to the desired network
    • Add another adapter and adapt it to the desired network
    • Adapt the driver of the LAN card via the CLI (see the sketch below)
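As far as I know, Fusion does not expose the adapter type in the GUI for this OS selection, so the driver is changed in the VM’s .vmx file while the VM is shut down. A sketch, assuming vmxnet3 is the desired driver (ethernet0/ethernet1 correspond to the two adapters):

ethernet0.virtualDev = "vmxnet3"
ethernet1.virtualDev = "vmxnet3"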

Installation VirtualBox

Create VM

For the installation under VirtualBox we first create a new VM with the following parameters:

  • OS: Other Linux 64 Bit
  • RAM: 4096MB
  • HDD 8 GB (default value)

VM Customize

Before the first start we have to adjust the following settings for our VM:

Optimize system

Change the paravirtualization to KVM.

Optimize network

  1. First NIC: set the driver/adapter type to "Paravirtualized Network (virtio-net)".
  2. Activate another NIC and set its driver/adapter type to match the first NIC.
  3. "Attached to" must be configured according to personal preference. It is important that both interfaces are NOT in the same broadcast domain.
  4. Promiscuous mode should be enabled for both cards (see the VBoxManage sketch below).
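The same settings can also be applied from the command line with VBoxManage. A sketch, assuming the VM is named "C9800-CL" (adjust to your VM name):

VBoxManage modifyvm "C9800-CL" --paravirtprovider kvm
VBoxManage modifyvm "C9800-CL" --nictype1 virtio --nicpromisc1 allow-all
VBoxManage modifyvm "C9800-CL" --nictype2 virtio --nicpromisc2 allow-all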

Installation C9800-CL

After we have configured the VM, we can start it and boot from the ISO image. Then we go through the installation according to the Cisco documentation.

After several installations I noticed that the first dialogue, "Press any key to continue.", sometimes took several minutes to appear.

Finally, it is important that the installation shows 1 virtual and 2 Gigabit Ethernet interfaces. This means that the drivers are correctly detected and the cards are active. I personally use the wizard-free configuration path, but this is up to everyone and is not part of this blog post.

C9800 Installation Done

Do you know PLINK?

Last week I needed to run some commands on a lot of APs and collect the output. Nowadays almost all customers use controllers and don’t need to connect to each AP. But as I learned last week, there are still some commands you can’t run on all APs through the controller CLI :/

For one of my open cases with a vendor I needed to run a command on 100+ APs. It was not possible to perform this action from the controller CLI. I would have to connect to each AP via SSH, log in, run the command and copy the output… Or I use Plink.exe with a simple batch file like in the old days. Plink is a command-line tool from the PuTTY suite.

For this job I use three files plus the Plink application. Let me describe each file:

command.txt

This file includes the commands we need to run. It’s important to include a command that terminates the session, and a carriage return at the end to execute the last command. For my small test, I just use "show version" and "exit" to terminate the session.

show version
exit


host.txt

For the hosts I use a file that also includes username and password. It’s also possible to create just a list of all hosts and place the username and password statically in the batch script. All entries are separated with a comma; to exclude a line I use a semicolon:

;HOST,USER,PASS
10.10.20.100,admin,12345678
10.10.20.101,admin,12345678


run.bat

The last part is the batch script that runs the commands on all devices and documents the output.

@echo off
echo Start Script > log.txt
for /f "eol=; tokens=1,2,3 delims=," %%a in (host.txt) do (
echo ————————— >> log.txt
echo Host: %%a >> log.txt
echo User: %%b >> log.txt
echo %TIME% >> log.txt
echo ————————— >> log.txt
plink.exe -ssh -l %%b -pw %%c %%a < command.txt >> log.txt
)


I’ll now describe the batch file line by line for easy understanding and quick changes:

echo Start Script > log.txt
Write the first line into log.txt; the > overwrites any previous log.txt file.

for /f "eol=; tokens=1,2,3 delims=," %%a in (host.txt) do (
This starts the loop and runs the following commands for each line in host.txt. The eol=; marks the ; as the symbol to exclude a line; the tokens are used for the three fields: host, user, pass.

echo ————————— >> log.txt
adds a line of – to log.txt; >> appends the information to log.txt

echo Host: %%a >> log.txt
adds the address of the host; %%a is the first field of the line

echo User: %%b >> log.txt
adds the user of the host; %%b is the second field of the line

echo %TIME% >> log.txt
adds the time to log.txt

echo ————————— >> log.txt
adds another line of – to log.txt

plink.exe -ssh -l %%b -pw %%c %%a < command.txt >> log.txt
-ssh
use SSH for the connection
-l %%b
add the user; %%b is the second field of the line
-pw %%c
add the password; %%c is the third field of the line
%%a
add the host; %%a is the first field of the line
< command.txt
feed the commands into the SSH session
>> log.txt
append the output, including the commands, to log.txt

)
End the loop.
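One caveat with Plink: the first time it connects to a host, it asks whether to cache the SSH host key, which stalls a non-interactive script. Either connect to each host once manually, or pre-seed the answer, for example like this (a sketch using the sample credentials from host.txt):

echo y | plink.exe -ssh -l admin -pw 12345678 10.10.20.100 "exit"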

The result

The result looks like the following:

Start Script
—————————
Host: 10.10.20.100
User: admin
22:27:58,17
—————————
show version
exit
vx9000-01*>show version
VX9000 version 5.8.4.0-034R
Copyright (c) 2004-2016 Symbol Technologies, Inc. All rights reserved.
Booted from secondary because of fallback!
vx9000-01 uptime is 0 days, 00 hours 29 minutes
CPU is Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
Base ethernet MAC address is *****
System serial number is *****
Model number is VX-9000-DEMO-WR
vx9000-01*>
vx9000-01*>exit
Using username "admin".
Start Script
—————————
Host: 10.10.20.101
User: admin
22:27:58,17
—————————
show version
exit
vx9000-02*>show version
VX9000 version 5.8.4.0-034R
Copyright (c) 2004-2016 Symbol Technologies, Inc. All rights reserved.
Booted from secondary because of fallback!
vx9000-02 uptime is 0 days, 00 hours 25 minutes
CPU is Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
Base ethernet MAC address is *****
System serial number is *****
Model number is VX-9000-DEMO-WR
vx9000-02*>
vx9000-02*>exit
Using username "admin".


You can find more information in the PuTTY manual.

Ekahau Customer Report Template – Loops & APs


Let’s start with the second part of my Ekahau Customer Report Template tutorial.

We already know from the first post how to create heat maps and show the requirements. As you may have noticed, the heat map is generated for one floor. If you have more than one floor, you need a loop, or only the first floor is shown.

You can create loops of different kinds. As a first loop we’ll use the “floors” loop.

Here is our last code:

Signal strength: <#${req-value-sig-strength}#> dBm
SNR: <#${req-value-snr}#> dB
5 GHz signal strength for my access points
<#”visualization”:{“heatmap”:{“type”: “sig-strength”,“filter”: {“include”: {“owner”: “my”, “band”: “5”}}},“aps”:{“show-name”: “true”,“filter”: {“include”: {“owner”: “my”, “band”: “5”}}}}#>
<#”visualization-legend”: {}#>
5 GHz SNR for all access points
<#”visualization”:{“heatmap”:{“type”:“snr”,“filter”:{“include”:{“owner”:“my”,“band”:“5”}}},“aps”:{“show-name”:“true”,“filter”:{“include”:{“owner”:“my”,“band”:“5”}}}}#>
<#”visualization-legend”:{}#>

The result you can see in this PDF file, v1.pdf.

Create a loop

This works fine with a one-floor project. To use it with multiple floors we need a loop and have to show the floor name:

Signal strength: <#${req-value-sig-strength}#> dBm
SNR: <#${req-value-snr}#> dB
<#“loop-start”: {“type”: “floors”}#>
Floor: <#${floor-name}#>
5 GHz signal strength for my access points
<#”visualization”:{“heatmap”:{“type”: “sig-strength”,“filter”: {“include”: {“owner”: “my”, “band”: “5”}}},“aps”:{“show-name”: “true”,“filter”: {“include”: {“owner”: “my”, “band”: “5”}}}}#>
<#”visualization-legend”: {}#>
5 GHz SNR for all access points
<#”visualization”:{“heatmap”:{“type”:“snr”,“filter”:{“include”:{“owner”:“my”,“band”:“5”}}},“aps”:{“show-name”:“true”,“filter”:{“include”:{“owner”:“my”,“band”:“5”}}}}#>
<#”visualization-legend”:{}#>
<#“loop-end”: {“type”: “floors”}#>

This loop will walk through all floors and generate all heat maps on a per-floor basis. The result looks like:

Signal strength: XX dBm
SNR: XX dB
Floor: Floor 1
5 GHz signal strength for my access points
HEAT MAP Signal Strength 5GHz, my aps
5 GHz SNR for all access points
HEATMAP SNR 5GHz, all aps
Floor: Floor 2
5 GHz signal strength for my access points
HEATMAP Signal Strength 5GHz, my aps
5 GHz SNR for all access points
HEATMAP SNR 5GHz, all aps

I add the PDF file, Loop-v2.pdf.

You can also create two loops if you first want to show the signal strength of each floor and then the SNR of each floor.

Signal strength: <#${req-value-sig-strength}#> dBm
SNR: <#${req-value-snr}#> dB
<#“loop-start”: {“type”: “floors”}#>
<#${floor-name}#> – 5 GHz signal strength for my access points
<#”visualization”:{“heatmap”:{“type”: “sig-strength”,“filter”: {“include”: {“owner”: “my”, “band”: “5”}}},“aps”:{“show-name”: “true”,“filter”: {“include”: {“owner”: “my”, “band”: “5”}}}}#>
<#”visualization-legend”: {}#>
<#“loop-end”: {“type”: “floors”}#>
<#“loop-start”: {“type”: “floors”}#>
<#${floor-name}#> – 5 GHz SNR for all access points
<#”visualization”:{“heatmap”:{“type”:“snr”,“filter”:{“include”:{“owner”:“my”,“band”:“5”}}},“aps”:{“show-name”:“true”,“filter”:{“include”:{“owner”:“my”,“band”:“5”}}}}#>
<#”visualization-legend”:{}#>
<#“loop-end”: {“type”: “floors”}#>

This loop walks through the heat map types and generates each one per floor. The result looks like:

Signal strength: XX dBm
SNR: XX dB
Floor 1 – 5 GHz signal strength for my access points
HEATMAP Signal Strength 5GHz, my aps
Floor 2 – 5 GHz signal strength for my access points
HEATMAP Signal Strength 5GHz, my aps

Floor 1 – 5 GHz SNR for all access points
HEATMAP SNR 5GHz, all aps
Floor 2 – 5 GHz SNR for all access points
HEATMAP SNR 5GHz, all aps

I add the PDF file, Loop-v3.pdf.

Show AP information

Now we can create heat maps of different types for different floors. But usually you also need a list of your access points. For this, we’ll work again with a loop, but now the type is “aps”.

To show you the easy way, I don’t use the previously created template with the heat maps. We’ll start with a new file.

<#“loop-start”: {”type”: ”aps”}#>
<#${ap-name}#>
<#“loop-end”: {“type”: “aps”}#>

This shows you the names of all access points in your file.


We can now filter to show just our own access points.

<#“loop-start”: {”type”: ”aps”,
“filter”: {
“include”:{
“owner”:”my”
}}
}#>
<#${ap-name}#>
<#“loop-end”: {“type”: “aps”}#>

Take a look at the second result; the "Not my AP" is missing:

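The include filter of the loop should also accept the other keys we already used for the visualizations, such as the band. A sketch (untested) that would list only my 5 GHz access points:

<#“loop-start”: {”type”: ”aps”,
“filter”: {
“include”:{
“owner”:”my”,
“band”:”5”
}}
}#>
<#${ap-name}#>
<#“loop-end”: {“type”: “aps”}#>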

The access point information is provided via Data Tags. You can see the different values in the User Guide. You can also create one template for site surveys and a second one for simulation. In the simulation template you can show the "tx power" and "antenna height" from the simulation.

All in one report

In the last step we’ll merge all this information into one report. I’ll use the sample where I show all information per floor. We’ll now also include the AP loop inside the floor loop. This shows the APs on a per-floor basis.

Signal strength: <#${req-value-sig-strength}#> dBm
SNR: <#${req-value-snr}#> dB
<#“loop-start”: {“type”: “floors”}#>
Floor: <#${floor-name}#>
Access Points Name:
<#“loop-start”: {”type”: ”aps”,“filter”: {“include”:{“owner”:”my”}}}#>
<#${ap-name}#>
<#“loop-end”: {“type”: “aps”}#>
5 GHz signal strength for my access points
<#”visualization”:{“heatmap”:{“type”: “sig-strength”,“filter”: {“include”: {“owner”: “my”, “band”: “5”}}},“aps”:{“show-name”: “true”,“filter”: {“include”: {“owner”: “my”, “band”: “5”}}}}#>
<#”visualization-legend”: {}#>
5 GHz SNR for all access points
<#”visualization”:{“heatmap”:{“type”:“snr”,“filter”:{“include”:{“owner”:“my”,“band”:“5”}}},“aps”:{“show-name”:“true”,“filter”:{“include”:{“owner”:“my”,“band”:“5”}}}}#>
<#”visualization-legend”:{}#>
<#“loop-end”: {“type”: “floors”}#>

You can see the final report in the PDF file, FINAL.pdf.

Summary – Ekahau Customer Report Template

I showed you some functions of the Ekahau Customer Report Template, but there is a lot more you can do with it. Best of all, you can combine almost every loop, filter and piece of information. As shown, we used the same loop for the AP name, but it shows different information in the final version because we use it inside a floor loop.

And now some tips from me:

  1. Read the user manual; it contains all the information
  2. Begin with small templates (the only bad part of this report function: debugging is not so easy)
  3. Try different report templates

Also read up on the different details:

Loops
Data Tags
Visualization Tags

If you are looking for more information or have any questions, feel free to leave a comment!

Follow me on twitter: dot11_de

Ekahau Customer Report Template – Let’s start


Hello,

I know that a lot of companies use Ekahau as their preferred tool for site surveys, simulation and validation. In the last years I have done a lot of troubleshooting. During this I checked the original site survey documentation and realized how many companies use the default template!

I also see people doing a one-day site survey and needing 1-2 days for documentation. For 3-5 days of surveying they need 2-3 days of documentation, copying & pasting images from the default report into the customized report…

This is the reason why I chose the customer report feature from Ekahau for my first blog post. I hope it’ll help some people stop wasting time!
