I'm looking for the ability to get statistical / performance data out of Oracle.
I am analyzing two solutions:
A. a native Go plugin using an Oracle driver
B. a daemon written in Java that exposes the data as JSON
Both have pros and cons: with A, latency may be high because the connection has to be re-established each time; with B, the collector would be an object external to Telegraf.
As always, I'm willing to test and develop.
From the Telegraf point of view, A would be the way to go
From an Oracle point of view, frequent connect/disconnect cycles are bad practice for the listener.
That's what moved me to try a small Java daemon: a single connection and lightweight SQL!
There is a project, https://sourceforge.net/projects/ashv/, that could be a starting point.
But I think it could be useful to try both solutions.
How can I be most useful?
If you want to try solution A, you can have a look at what I did for Postgres in #792.
Moving it to Oracle wouldn't be a big deal.
I guess that you are trying to build a dashboard based on ASH?
The first milestone is a dashboard with Top Activity graphs across multiple Oracle instances, without detail on sessions and queries.
I'm thinking about live monitoring, with a way to build dashboards for an application stack (accelerator, application, db).
going with A, why would it have to do frequent connect/disconnect? couldn't the plugin just cache and hold onto a connection?
Sure! I'll have to try how that could work in Go! ;) I'm still a junior with it.
Didn't know it was possible.
Is it just a matter of not closing the connection and adding a defer close?
just making the connection a field on the plugin struct
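For what it's worth, the pattern could be sketched like this. The struct name, DSN format, and the go-oci8 driver name are my assumptions (no official Telegraf Oracle plugin exists yet); only the caching logic is the point:

```go
package main

import (
	"database/sql"
	"fmt"
	// _ "github.com/mattn/go-oci8" // hypothetical Oracle driver import in a real build
)

// OracleInput keeps the *sql.DB handle as a struct field, so it is
// opened once and reused across Gather() calls instead of
// connecting/disconnecting on every collection interval.
type OracleInput struct {
	DSN string
	db  *sql.DB // nil until the first successful Open
}

// connect lazily opens the handle and caches it on the struct.
func (o *OracleInput) connect() (*sql.DB, error) {
	if o.db != nil {
		return o.db, nil // reuse the cached handle
	}
	db, err := sql.Open("oci8", o.DSN) // driver name is an assumption
	if err != nil {
		return nil, err
	}
	o.db = db
	return o.db, nil
}

func main() {
	o := &OracleInput{DSN: "user/password@localhost:1521/XE"}
	if _, err := o.connect(); err != nil {
		// without the driver linked in, this reports an unknown-driver error
		fmt.Println("open failed:", err)
	}
}
```

Note that *sql.DB is itself a connection pool, so there is no per-call defer close; closing the handle belongs in the plugin's shutdown path.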
OK, it's a bit off the subject of this issue, but do you have a link to an example?
I started writing a universal DB query plugin. Just to avoid duplication: has anybody else started working on such a plugin?
It's based on Go's database/sql, so Oracle, Postgres, and MySQL should be supported (tested: Oracle XE 12.1 via go-oci8). It supports dynamic tagging based on column names, and multiple queries.
But it's my first Go project, so the design might not be very advanced (I'm a C++ guy). It's very alpha for now, but it works for my use case. I'll test larger datasets in the coming weeks.
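To illustrate what I mean by dynamic tagging, here is a minimal sketch. The `tag_` column-name prefix is just a hypothetical convention for this example, not necessarily the plugin's real rule:

```go
package main

import (
	"fmt"
	"strings"
)

// splitRow sorts one query-result row into Telegraf-style tags and
// fields: columns whose name starts with "tag_" become tags, and
// everything else becomes a field.
func splitRow(cols []string, vals []interface{}) (map[string]string, map[string]interface{}) {
	tags := make(map[string]string)
	fields := make(map[string]interface{})
	for i, c := range cols {
		if strings.HasPrefix(c, "tag_") {
			tags[strings.TrimPrefix(c, "tag_")] = fmt.Sprint(vals[i])
		} else {
			fields[c] = vals[i]
		}
	}
	return tags, fields
}

func main() {
	cols := []string{"tag_instance", "sessions_active"}
	vals := []interface{}{"ORCL1", 42}
	tags, fields := splitRow(cols, vals)
	fmt.Println(tags, fields) // map[instance:ORCL1] map[sessions_active:42]
}
```

In a real plugin the `cols` slice would come from `rows.Columns()` on the database/sql result set.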
thanks @bbczeuz, you can also ask in the issue for a generic db query plugin: https://github.com/influxdata/telegraf/issues/352
@abarban You might get some inspiration from scollector's Oracle collector: https://github.com/bosun-monitor/bosun/blob/master/cmd/scollector/collectors/oracle.go . I haven't used it myself (yet), so I have no idea how applicable it is.
@sparrc what's the status of this?
not planned for any upcoming release
Been getting a bunch of interest in it here today at Capital One.
I have some running code in my fork (the zseng_dev_sqlquery branch), but I plan on changing the config architecture to something like the snmp plugin's. There are some non-public commits in my internal repo that fix NULL handling.
One thing I haven't yet found a clean solution for: integrating the Oracle client so that no manual installation is needed, or dynamically loading a driver for Oracle DB.
As a workaround, you can use the inputs.prometheus plugin together with the Prometheus oracledb_exporter.
This is rather inconvenient, but it's acceptable as a temporary solution.
1) install the Oracle Instant Client (or the full client)
2) get the exporter from its GitHub releases page:
https://prometheus.io/docs/instrumenting/exporters/
https://github.com/iamseth/oracledb_exporter/releases
3) create a config for the Telegraf plugin (inputs.prometheus):
[[inputs.prometheus]]
  urls = ["http://localhost:9161/metrics"]
  response_timeout = "3s"
  [inputs.prometheus.tags]
    db = "DBNAME"
4) create a wrapper for the exporter:
cd /opt/prometheus_oracle_exporter
sudo ln -s /usr/lib/oracle/12.2/client64/lib/libclntsh.so.12.1 libclntsh.so.18.1
vim wrapper_oracledb_exporter.sh
It must contain the credentials, the connection string, and LD_LIBRARY_PATH:
#!/bin/bash
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
cd "$SCRIPT_DIR"
ORA_CLIENT="/usr/lib/oracle/12.2/client64"
export LD_LIBRARY_PATH="$ORA_CLIENT/lib:$SCRIPT_DIR"
export DATA_SOURCE_NAME='db_login:db_password@ORACLE_CONNECTION_STRING'
./oracledb_exporter
5) start the wrapper and restart Telegraf
6) unfortunately, almost all performance metrics end up in separate InfluxDB measurements
In some cases this can be changed by replacing "fieldtoappend" with "labels" in default-metrics.toml, but not in all of them ....
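For illustration, the edit in default-metrics.toml looks roughly like this (the exact entry is hypothetical; check the file shipped with your exporter version):

```toml
[[metric]]
context = "sysstat"
# before: the statistic name was appended to the field name,
# scattering values across measurements
# fieldtoappend = "name"
# after: keep the statistic name as a label (a tag in InfluxDB),
# so the values stay in one measurement
labels = ["name"]
metricsdesc = { value = "Generic counter metric from v$sysstat." }
request = "SELECT name, value FROM v$sysstat"
```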
Also, I am not an expert in license compatibility, but it might be possible to borrow code from that project (also Go, MIT-licensed) for the plugin being created.
Hi @OlegKorchagin, have you tested this one? It seems a good starting point for getting Oracle performance metrics into InfluxDB.
https://github.com/rsolorzano2410/sqlcollector
You can see a config file example here:
https://github.com/rsolorzano2410/sqlcollector/blob/master/src/main/resources/conf/SQLCollector.xml