To support streaming data, it would be convenient to have an IPython widget that gives users the ability to add more rows of data asynchronously.
This arose in conversation with @ellisonbg and @arvind, both of whom I suspect would have more information on how this could be achieved.
Yes! This shouldn't be too much work, but we will need to figure out the build issues for Vega-Lite 2.0 first. Our basic idea is that the widget would have two synced attributes, the spec and the data, along with methods to add new data, remove old rows, etc. This is the granularity of the vega-embed API, so it doesn't make sense to do anything more fine-grained at this point.
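The idea above can be sketched as a plain-Python model of the two synced attributes plus row-level operations. Note this is purely hypothetical — none of these class or method names exist in Altair; in a real widget the attributes would be synced traitlets:

```python
# Hypothetical sketch of the proposed widget's data model: a spec plus a
# mutable data table, with coarse-grained insert/remove operations that
# mirror the granularity of vega-embed changesets. All names are invented.

class StreamingChartModel:
    def __init__(self, spec, data=None):
        self.spec = spec              # would be a synced traitlet in a widget
        self.data = list(data or [])  # list of row dicts, also synced

    def add_rows(self, rows):
        """Append new rows (the analogue of vega changeset .insert())."""
        self.data.extend(rows)

    def remove_rows(self, predicate):
        """Drop rows matching predicate (the analogue of .remove())."""
        self.data = [row for row in self.data if not predicate(row)]

model = StreamingChartModel(spec={"mark": "line"})
model.add_rows([{"x": 0, "y": 1}, {"x": 1, "y": 3}])
model.remove_rows(lambda row: row["x"] < 1)
```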
--
Brian E. Granger
Associate Professor of Physics and Data Science
Cal Poly State University, San Luis Obispo
@ellisonbg on Twitter and GitHub
[email protected] and [email protected]
I'm planning an upcoming conference talk (PyData NYC). My guess is that the odds of this being complete one week from now are fairly slim, yes? (This is fine; I just need to plan alternate demos.)
Yes, not likely ;-)
Any updates on the ability to chart a growing dataframe? Or alternative approaches? Thanks!
Not an IPython widget, but I came up with a way to stream data to Vega plots through Flask and Flask-SocketIO.
I had to modify the Altair JavaScript entry point to insert a callback that updates the data.
From there I could use SocketIO to send events from Python, in a thread, to update the plot.
I was able to package it in a simple Python function.
Leaving this here in case it helps:
var socket = io();

// Update the Vega chart data with new incoming data from the server.
var appendStreamData = function (id) {
  return function (chart) {
    // Register the event handler for this chart's stream events.
    socket.on('stream_data_' + id, function (data) {
      var values = data['new_values'];
      var name = data['name'];
      var changeSet = vega
        .changeset()
        .insert(values);
      // console.log('receiving ' + name + ' ' + values);
      chart.view.change(name, changeSet).run();
    });
    console.log('stream handler attached');
  };
};

var displayVega = function (vegaEmbed, elementId, spec) {
  var embedOpt = {
    "mode": "vega-lite"
  };
  function showError(el, error) {
    el.innerHTML = ('<div class="error" style="color:red;">' +
      '<p>JavaScript Error: ' + error.message + '</p>' +
      "<p>This usually means there's a typo in your chart specification. " +
      "See the javascript console for the full traceback.</p>" +
      '</div>');
    throw error;
  }
  const el = document.getElementById(elementId);
  vegaEmbed("#" + elementId, spec, embedOpt)
    .then(appendStreamData(elementId))
    .catch(error => showError(el, error));
};

displayVega(vegaEmbed, myid, spec);
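On the Python side, a minimal sketch of the sender would emit events whose name and payload match what the JavaScript handler expects: event `'stream_data_' + id`, payload with `'name'` and `'new_values'` keys. Only that shape is taken from the snippet above; the function names here are invented, and the real transport would be Flask-SocketIO's `socketio.emit()`, replaced below by a stand-in so the payload construction can be shown on its own:

```python
# Sketch of the Python side feeding the JavaScript stream handler.
# In the real setup, `emit` would be flask_socketio.SocketIO.emit();
# here a stand-in records events so the payload shape is visible.
# Only the event name and payload keys come from the JS code above;
# everything else is a hypothetical illustration.

sent = []

def emit(event, payload):
    # Stand-in for socketio.emit(event, payload).
    sent.append((event, payload))

def stream_rows(element_id, dataset_name, rows):
    """Push new rows to the chart embedded at `element_id`."""
    emit('stream_data_' + element_id, {
        'name': dataset_name,       # the named Vega dataset to change
        'new_values': list(rows),   # rows inserted via vega.changeset()
    })

stream_rows('chart-1', 'table', [{'x': 0, 'y': 1}, {'x': 1, 'y': 2}])
```

In the real application this function would run in a background thread, as described above, emitting whenever new data arrives.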