
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (48)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors can modify their information on the authors page
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (9665)
-
How to visualize matplotlib animation in Jupyter notebook
23 April 2020, by anonymous13
I am trying to create a racing bar chart similar to the one in this article: https://towardsdatascience.com/bar-chart-race-in-python-with-matplotlib-8e687a5c8a41.
However, I am unable to see the animation in my Jupyter notebook.



Code:



import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import matplotlib.animation as animation
from IPython.display import HTML

df = pd.read_csv('https://gist.githubusercontent.com/johnburnmurdoch/4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations',
                 usecols=['name', 'group', 'year', 'value'])

current_year = 2018
dff = (df[df['year'].eq(current_year)]
       .sort_values(by='value', ascending=True)
       .head(10))

colors = dict(zip(
    ['India', 'Europe', 'Asia', 'Latin America',
     'Middle East', 'North America', 'Africa'],
    ['#adb0ff', '#ffb3ff', '#90d595', '#e48381',
     '#aafbff', '#f7bb5f', '#eafb50']
))
group_lk = df.set_index('name')['group'].to_dict()


fig, ax = plt.subplots(figsize=(15, 8))

def draw_barchart(year):
    dff = df[df['year'].eq(year)].sort_values(by='value', ascending=True).tail(10)
    ax.clear()
    ax.barh(dff['name'], dff['value'], color=[colors[group_lk[x]] for x in dff['name']])
    dx = dff['value'].max() / 200
    for i, (value, name) in enumerate(zip(dff['value'], dff['name'])):
        ax.text(value - dx, i, name, size=14, weight=600, ha='right', va='bottom')
        ax.text(value - dx, i - .25, group_lk[name], size=10, color='#444444', ha='right', va='baseline')
        ax.text(value + dx, i, f'{value:,.0f}', size=14, ha='left', va='center')
    # ... polished styles
    ax.text(1, 0.4, year, transform=ax.transAxes, color='#777777', size=46, ha='right', weight=800)
    ax.text(0, 1.06, 'Population (thousands)', transform=ax.transAxes, size=12, color='#777777')
    ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
    ax.xaxis.set_ticks_position('top')
    ax.tick_params(axis='x', colors='#777777', labelsize=12)
    ax.set_yticks([])
    ax.margins(0, 0.01)
    ax.grid(which='major', axis='x', linestyle='-')
    ax.set_axisbelow(True)
    ax.text(0, 1.12, 'The most populous cities in the world from 1500 to 2018',
            transform=ax.transAxes, size=24, weight=600, ha='left')
    ax.text(1, 0, 'by @pratapvardhan; credit @jburnmurdoch', transform=ax.transAxes, ha='right',
            color='#777777', bbox=dict(facecolor='white', alpha=0.8, edgecolor='white'))
    plt.box(False)

draw_barchart(2018)

import matplotlib.animation as animation
from IPython.display import HTML
fig, ax = plt.subplots(figsize=(15, 8))
animator = animation.FuncAnimation(fig, draw_barchart, frames=range(1968, 2019))
HTML(animator.to_jshtml())





Below is what I tried, and the errors I got:



HTML(animator.to_jshtml()) <-- static output with player buttons, but the animation cannot be visualized
plt.rcParams["animation.html"] = "jshtml" <-- no error, but no output either
HTML(animator.to_html5_video()) <-- "Requested MovieWriter (ffmpeg) not available"





Note: I have FFmpeg installed on my system.
Can you help me with this issue?
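
For anyone hitting the same wall, here is a minimal sketch of the two usual workarounds. It uses a trivial stand-in animation (substitute the FuncAnimation from the code above), and the ffmpeg path is an assumption to adjust for your system:

import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML, display

# A trivial stand-in animation; substitute the "animator" FuncAnimation
# built in the question's code.
fig, ax = plt.subplots()
line, = ax.plot([], [])

def update(frame):
    line.set_data(range(frame + 1), range(frame + 1))
    ax.relim()
    ax.autoscale_view()
    return line,

anim = animation.FuncAnimation(fig, update, frames=10)

# 1) jshtml output only renders if it is the last expression in the cell,
#    or if it is explicitly wrapped in display(); otherwise Jupyter shows
#    only the static figure.
display(HTML(anim.to_jshtml()))

# 2) "Requested MovieWriter (ffmpeg) not available" usually means matplotlib
#    cannot find the ffmpeg executable even though it is installed. Pointing
#    the rcParam at the binary often fixes it (the path is an assumption):
plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'
# display(HTML(anim.to_html5_video()))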


-
Muxing raw h264 + aac into an mp4 file: av_interleaved_write_frame() returns 0 but the video is not playable
3 April 2020, by Jaw109
I have a program [1] that muxes audio and video into an mp4 file (in individual worker threads, retrieving audio/video frames from a streaming daemon). The audio plays perfectly in VLC, but the video is not playable; VLC's debug log says the start code of the video frames cannot be found.



I have another demuxing program [2] that reads back all the frames to see what happened. I found that the video frames had been modified:



00000001 674D0029... was modified into 00000019 674D0029... (framesize is 29)
00000001 68EE3C80... was modified into 00000004 68EE3C80... (framesize is 8)
00000001 65888010... was modified into 0002D56F 65888010... (framesize is 185715)
00000001 619A0101... was modified into 00003E1E 619A0101... (framesize is 15906)
00000001 619A0202... was modified into 00003E3C 619A0202... (framesize is 15936)
00000001 619A0303... was modified into 00003E1E 619A0303... (framesize is 15581)




It seems the H.264 start codes were replaced with something like the frame size. But why? Did I do anything wrong? (Any ideas? Some flag? AVPacket initialization? AVPacket data copied wrongly?)



[1] muxing program



int go_on = 1;
std::mutex g_mutex;
AVFormatContext* g_FmtCntx = NULL;
AVStream* g_AudioStream = NULL;
AVStream* g_VideoStream = NULL;

int polling_ringbuffer(int stream_type);

int main(int argc, char** argv)
{
    g_FmtCntx = avformat_alloc_context();
    avio_open(&g_FmtCntx->pb, argv[1], AVIO_FLAG_WRITE);
    g_FmtCntx->oformat = av_guess_format(NULL, argv[1], NULL);
    g_AudioStream = avformat_new_stream(g_FmtCntx, NULL);
    g_VideoStream = avformat_new_stream(g_FmtCntx, NULL);
    initAudioStream(g_AudioStream->codecpar);
    initVideoStream(g_VideoStream->codecpar);
    avformat_write_header(g_FmtCntx, NULL);

    std::thread audio(polling_ringbuffer, AUDIO_RINGBUFFER);
    std::thread video(polling_ringbuffer, VIDEO_RINGBUFFER);

    audio.join();
    video.join();

    av_write_trailer(g_FmtCntx);
    if (g_FmtCntx->oformat && !(g_FmtCntx->oformat->flags & AVFMT_NOFILE) && g_FmtCntx->pb)
        avio_close(g_FmtCntx->pb);
    avformat_free_context(g_FmtCntx);

    return 0;
}

int polling_ringbuffer(int stream_type)
{
    uint8_t* data = new uint8_t[1024*1024];
    int64_t timestamp = 0;
    int data_len = 0;
    while (go_on)
    {
        // the lock is held for the whole iteration, including the write
        const std::lock_guard<std::mutex> lock(g_mutex);
        data_len = ReadRingbuffer(stream_type, data, 1024*1024, &timestamp);

        AVPacket pkt = {0};
        av_init_packet(&pkt);
        pkt.data = data;
        pkt.size = data_len;

        static AVRational r = {1, 1000};  // ringbuffer timestamps are in milliseconds
        switch (stream_type)
        {
        case STREAMTYPE_AUDIO:
            pkt.stream_index = g_AudioStream->index;
            pkt.flags = 0;
            pkt.pts = av_rescale_q(timestamp, r, g_AudioStream->time_base);
            break;
        case STREAMTYPE_VIDEO:
            pkt.stream_index = g_VideoStream->index;
            pkt.flags = isKeyFrame(data, data_len) ? AV_PKT_FLAG_KEY : 0;
            pkt.pts = av_rescale_q(timestamp, r, g_VideoStream->time_base);
            break;
        }
        // note: this static is shared by the audio and the video thread
        static int64_t lastPTS = 0;
        pkt.dts = pkt.pts;
        pkt.duration = (lastPTS == 0) ? 0 : (pkt.pts - lastPTS);
        lastPTS = pkt.pts;

        int ret = av_interleaved_write_frame(g_FmtCntx, &pkt);
        if (0 != ret)
            printf("[%s:%d] av_interleaved_write_frame():%d\n", __FILE__, __LINE__, ret);
    }

    return 0;
}




[2] demuxing program



int main(int argc, char** argv)
{
    AVFormatContext* pFormatCtx = avformat_alloc_context();
    AVPacket pkt;
    av_init_packet(&pkt);
    avformat_open_input(&pFormatCtx, argv[1], NULL, NULL);
    for (;;)
    {
        if (av_read_frame(pFormatCtx, &pkt) >= 0)
        {
            printf("[%d] %s (len:%d)\n", pkt.stream_index, BinToHex(pkt.data, MIN(64, pkt.size)), pkt.size);
            av_packet_unref(&pkt);  // release the packet buffer before the next read
        }
        else
            break;
    }

    avformat_close_input(&pFormatCtx);
    return 0;
}




[3] Here is my environment:



Linux MY-RASP-4 4.14.98 #1 SMP Mon Jun 24 12:34:42 UTC 2019 armv7l GNU/Linux
ffmpeg version 4.1 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 8.2.0 (GCC)

libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
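
For what it's worth, the rewritten prefixes in the dump above look exactly like MP4's AVCC layout: the container replaces each 00000001 Annex-B start code with the 32-bit length of the NAL unit that follows (e.g. 0x0002D56F = 185711 = 185715 - 4), so the rewrite itself appears to be normal muxer behaviour rather than corruption. What typically makes such a file unplayable is an empty codecpar->extradata: without the SPS/PPS the muxer cannot write a valid avcC box, so players find no parameter sets. Below is a minimal sketch of one possible initVideoStream() (the question's own helper, reconstructed under assumptions; the SPS/PPS bytes and dimensions are placeholders to be filled in from the real stream):

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>
}
#include <cstring>

// Annex-B formatted extradata (start codes + SPS + PPS) is accepted by the
// MP4 muxer and converted into the avcC box. The bytes below are placeholders
// based on the dump above; use the complete NAL units from your encoder.
static const uint8_t sps_pps[] = {
    0x00, 0x00, 0x00, 0x01, 0x67, 0x4D, 0x00, 0x29, /* ... rest of SPS ... */
    0x00, 0x00, 0x00, 0x01, 0x68, 0xEE, 0x3C, 0x80  /* ... rest of PPS ... */
};

void initVideoStream(AVCodecParameters* par)
{
    par->codec_type = AVMEDIA_TYPE_VIDEO;
    par->codec_id   = AV_CODEC_ID_H264;
    par->width      = 1920;  // assumption: the real dimensions come from the SPS
    par->height     = 1080;
    // hand the parameter sets to the muxer so it can build a valid avcC box
    par->extradata = (uint8_t*)av_mallocz(sizeof(sps_pps) + AV_INPUT_BUFFER_PADDING_SIZE);
    memcpy(par->extradata, sps_pps, sizeof(sps_pps));
    par->extradata_size = sizeof(sps_pps);
}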



-
Running pulseaudio inside a Docker container to record system audio
20 March 2023, by XXLuigiMario
I'm trying to set up a Docker container with Selenium that records the browser, with system audio, using ffmpeg. I've got video working using Xvfb. Unfortunately, the audio side seems to be trickier.


I thought I would set up a virtual PulseAudio sink inside the container, which would allow me to record its monitor:


pacmd load-module module-null-sink sink_name=loopback
pacmd set-default-sink loopback
ffmpeg -f pulse -i loopback.monitor test.wav



This works on my host operating system, but when I try to start the pulseaudio daemon in a container, it fails with the following message:


E: [pulseaudio] module-console-kit.c: Unable to contact D-Bus system bus: org.freedesktop.DBus.Error.FileNotFound: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory



This seems to be related to D-Bus, the freedesktop.org message bus. I've tried installing it and starting its daemon, but I couldn't get it to work properly.
I couldn't find much information on how to proceed from here. What am I missing for pulseaudio? Perhaps there's an easier way to record the system audio inside a container?


My goal is not to record it from the host operating system, but to play the audio inside the browser and record it all inside the same container.
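
In case it helps others stuck on the same error: one approach that may sidestep D-Bus entirely (a sketch under assumptions, not a tested recipe, though the flags and module names are standard PulseAudio options) is to start the daemon with a minimal script file, so that desktop-integration modules such as module-console-kit are never loaded:

# custom.pa -- load only what recording needs; no console-kit, no D-Bus modules
cat > /etc/pulse/custom.pa <<'EOF'
load-module module-native-protocol-unix
load-module module-null-sink sink_name=loopback
set-default-sink loopback
EOF

# -n skips the stock default.pa (which is what pulls in module-console-kit);
# --exit-idle-time=-1 keeps the daemon alive even with no clients attached
pulseaudio -D -n --file=/etc/pulse/custom.pa --exit-idle-time=-1

# then record the null sink's monitor as before
ffmpeg -f pulse -i loopback.monitor test.wav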