
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
Other articles (37)
-
Apache-specific configuration
4 February 2011
Specific modules
For Apache configuration, it is advisable to enable certain modules that are not specific to MediaSPIP but improve performance: mod_deflate and mod_headers, so that Apache compresses pages automatically (see this tutorial); mod_expires, to manage the expiration of cached hits correctly (see this tutorial).
It is also advisable to add Apache support for the WebM mime-type, as described in this tutorial.
Creating a (...)
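Purely as a hedged sketch (the directive values below are illustrative assumptions, not MediaSPIP's documented settings; the tutorials referenced above remain the authoritative source), enabling those modules on a Debian-style Apache might look like:

# a2enmod deflate headers expires
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png "access plus 1 month"
</IfModule>
# WebM mime-type support, as recommended above
AddType video/webm .webm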
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
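As a hedged sketch of that pattern (file names and player paths are placeholders, not MediaSPIP's actual markup), an HTML5 video tag with a Flash fallback is typically written like this:

<video controls>
  <source src="clip.webm" type="video/webm">
  <source src="clip.mp4" type="video/mp4">
  <!-- browsers without HTML5 video ignore the sources and render the fallback -->
  <object type="application/x-shockwave-flash" data="flowplayer.swf">
    <param name="movie" value="flowplayer.swf">
    <param name="flashvars" value='config={"clip":"clip.mp4"}'>
  </object>
</video>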
-
From upload to final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
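The excerpt truncates here, and SPIPMotion's actual implementation is not shown. Purely as a hedged sketch of the two automatic steps (file names and the probe/extract parameters are illustrative assumptions), the equivalent ffprobe/ffmpeg calls could be driven like this:

import json
import subprocess

SOURCE = "source.mp4"  # placeholder for the attached "source" document

# Step 1: retrieve the technical information of the file's audio/video streams
probe = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_streams", SOURCE],
    capture_output=True, text=True, check=True)
streams = json.loads(probe.stdout)["streams"]

# Step 2: generate a thumbnail by extracting a single frame
subprocess.run(
    ["ffmpeg", "-y", "-i", SOURCE, "-ss", "1", "-frames:v", "1", "thumbnail.png"],
    check=True)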
On other sites (3130)
-
How to set background transparency for animation with ffmpeg
23 October 2023, by Jan Turowski
I am creating animated physics graphs with a transparent background for later use in an NLE. On my old machine at work they display and render with background transparency just fine. The exact same code, however, loses background transparency in the ffmpeg render on both my Linux and my Windows machines at home. The animations themselves display just fine on all machines.


Because I first thought it was a Linux issue, I tried running the code on my Windows machine, expecting it to work again. Unfortunately, it did not.


Reduced code:


import numpy as np
import matplotlib.pylab as plt
from matplotlib.animation import FuncAnimation
import matplotlib.animation as animation
from matplotlib.pyplot import figure
from matplotlib import style
import locale
from matplotlib.ticker import (MultipleLocator, AutoMinorLocator)

# Set to German locale to get comma decimal separator
locale.setlocale(locale.LC_NUMERIC, "de_DE")
# Tell matplotlib to use the locale we set above
plt.rcParams['axes.formatter.use_locale'] = True

# plt.clf()
# plt.rcdefaults()

# Define style and font
style.use('dark_background')

# Create axis arrows
def arrowed_spines(fig, ax):

    xmin, xmax = ax.get_xlim()
    ymin, ymax = ax.get_ylim()

    # removing the default axis on all sides:
    for side in ['bottom', 'right', 'top', 'left']:
        ax.spines[side].set_visible(False)

    # removing the axis ticks
    # plt.xticks([])  # labels
    # plt.yticks([])
    # ax.xaxis.set_ticks_position('none')  # tick markers
    # ax.yaxis.set_ticks_position('none')

    # get width and height of axes object to compute
    # matching arrowhead length and width
    dps = fig.dpi_scale_trans.inverted()
    bbox = ax.get_window_extent().transformed(dps)
    width, height = bbox.width, bbox.height

    # manual arrowhead width and length
    hw = 1. / 20. * (ymax - ymin)
    hl = 1. / 20. * (xmax - xmin)
    lw = 1.    # axis line width
    ohg = 0.3  # arrow overhang

    # compute matching arrowhead length and width
    yhw = hw / (ymax - ymin) * (xmax - xmin) * height / width
    yhl = hl / (xmax - xmin) * (ymax - ymin) * width / height

    # draw x and y axis
    ax.arrow(xmin, 0, xmax - xmin, 0., fc='w', ec='w', lw=lw,
             head_width=hw, head_length=hl, overhang=ohg,
             length_includes_head=True, clip_on=False)

    ax.arrow(0, ymin, 0., ymax - ymin, fc='w', ec='w', lw=lw,
             head_width=yhw, head_length=yhl, overhang=ohg,
             length_includes_head=True, clip_on=False)

# My easing function (smoothstep between 0 and 1)
def ease(n):
    if n < 0.0:
        return 0
    elif n > 1.0:
        return 1
    else:
        return 3*n**2 - 2*n**3

# My floor/wait function
def wait(n):
    if n < 0.0:
        return 0
    else:
        return n

# Create the canvas
fig = plt.figure()
ax = fig.add_subplot(111)
fig.set_size_inches([8, 9])

def f(x):
    return -0.05*x**2 + 125

xlin = np.linspace(0, 60, 100)


# Labels and appearance

plt.xlabel(r"$x$ in $\rm{m}$", horizontalalignment='right', x=1.0)
plt.ylabel(r"$y$ in $\rm{m}$", horizontalalignment='right', y=1.0)
ax.set_xlim(0, 100)
ax.set_ylim(0, 139)
plt.grid(alpha=.4)
plt.xticks(np.arange(0, 100, 20))
plt.yticks(np.arange(0, 140, 20))
ax.yaxis.set_minor_locator(MultipleLocator(10))
ax.xaxis.set_minor_locator(MultipleLocator(10))
ax.tick_params(axis='x', direction="inout", length=10.0, which='both', width=3)
ax.tick_params(axis='y', direction="inout", length=10.0, which='both', width=3)


xsub = np.array([0])

# define static lines
line2, = ax.plot(xsub, f(xsub), linewidth=5, zorder=0, c='b')
arrowed_spines(fig, ax)
plt.tight_layout()

# animate the lines
def animate(i):

    xsub = xlin[0:wait(i - 20)]
    global line2
    line2.remove()
    line2, = ax.plot(xsub, f(xsub), linewidth=5, zorder=0, c="b")
    plt.tight_layout()

# renamed from "animation" to avoid shadowing the imported matplotlib.animation module
anim = FuncAnimation(fig, animate, np.arange(0, 130, 1), interval=100)

plt.show()

# anim.save(r"YOUR\PATH\HERE\reduced_x-y.mov", codec="png",
#           dpi=100, bitrate=-1,
#           savefig_kwargs={'transparent': True, 'facecolor': 'none'})
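Not a confirmed fix, but a hedged variation worth trying: construct the writer explicitly and force an alpha-capable pixel format, so ffmpeg cannot silently negotiate the alpha channel away. Whether transparency survives still depends on the local ffmpeg build:

# Hypothetical variant of the commented-out save call above
from matplotlib.animation import FFMpegWriter

writer = FFMpegWriter(fps=10, codec="png",
                      extra_args=["-pix_fmt", "rgba"])  # keep the alpha plane
anim.save(r"YOUR\PATH\HERE\reduced_x-y.mov", writer=writer, dpi=100,
          savefig_kwargs={'transparent': True, 'facecolor': 'none'})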




-
Setting up audio queue for ffmpeg rtsp stream playing
20 November 2011, by illew
I'm working on an RTSP streaming (AAC format) client for iOS using ffmpeg. Right now I can only say my app is workable, but the streaming sound is very noisy and even a little distorted, far worse than when the stream is played by vlc or mplayer.
The stream is read by av_read_frame() and decoded by avcodec_decode_audio3(). Then I just send the decoded raw audio to the Audio Queue.
When decoding a local AAC file with my app, the sound is not nearly as noisy. I know the initial encoding dramatically affects the result; however, I should at least try to make it sound like other streaming clients...
Many parts of my implementation/modification actually came from trial and error. I believe I'm doing something wrong in setting up the Audio Queue and in the callback function that fills the audio buffers.
Any hints, suggestions or help are greatly appreciated.
// --- info of test materials dumped by av_dump_format() ---
Metadata:
  title : /demo/test.3gp
Duration: 00:00:30.11, start: 0.000000, bitrate: N/A
  Stream #0:0: Audio: aac, 32000 Hz, stereo, s16
aac  Advanced Audio Coding

// --- the Audio Queue setup procedure ---
- (void)startPlayback
{
    OSStatus err = 0;
    if (playState.playing) return;

    playState.started = false;

    if (!playState.queue)
    {
        UInt32 bufferSize;

        playState.format.mSampleRate = _av->audio.sample_rate;
        playState.format.mFormatID = kAudioFormatLinearPCM;
        playState.format.mFormatFlags = kAudioFormatFlagsCanonical;
        playState.format.mChannelsPerFrame = _av->audio.channels_per_frame;
        playState.format.mBytesPerPacket = sizeof(AudioSampleType) * _av->audio.channels_per_frame;
        playState.format.mBytesPerFrame = sizeof(AudioSampleType) * _av->audio.channels_per_frame;
        playState.format.mBitsPerChannel = 8 * sizeof(AudioSampleType);
        playState.format.mFramesPerPacket = 1;
        playState.format.mReserved = 0;

        pauseStart = 0;
        DeriveBufferSize(playState.format, playState.format.mBytesPerPacket, BUFFER_DURATION, &bufferSize, &numPacketsToRead);

        err = AudioQueueNewOutput(&playState.format, aqCallback, &playState, NULL, kCFRunLoopCommonModes, 0, &playState.queue);
        if (err != 0)
        {
            printf("AQHandler.m startPlayback: Error creating new AudioQueue: %d \n", (int)err);
        }

        for (int i = 0; i < NUM_BUFFERS; i++)
        {
            err = AudioQueueAllocateBufferWithPacketDescriptions(playState.queue, bufferSize, numPacketsToRead, &playState.buffers[i]);
            if (err != 0)
                printf("AQHandler.m startPlayback: Error allocating buffer %d", i);
            fillAudioBuffer(&playState, playState.queue, playState.buffers[i]);
        }
    }

    startTime = mu_currentTimeInMicros();

    err = AudioQueueStart(playState.queue, NULL);
    if (err)
    {
        char sErr[4];
        printf("AQHandler.m startPlayback: Could not start queue %ld %s.", (long)err, FormatError(sErr, err));
        playState.playing = NO;
    }
    else
    {
        AudioSessionSetActive(true);
        playState.playing = YES;
    }
}

// --- callback for filling the audio buffer ---
static int ct = 0;
static void fillAudioBuffer(void *info, AudioQueueRef queue, AudioQueueBufferRef buffer)
{
    int lengthCopied = INT32_MAX;
    int dts = 0;
    int isDone = 0;

    buffer->mAudioDataByteSize = 0;
    buffer->mPacketDescriptionCount = 0;

    OSStatus err = 0;
    AudioTimeStamp bufferStartTime;
    AudioQueueGetCurrentTime(queue, NULL, &bufferStartTime, NULL);

    PlayState *ps = (PlayState *)info;
    if (!ps->started)
        ps->started = true;

    while (buffer->mPacketDescriptionCount < numPacketsToRead && lengthCopied > 0)
    {
        lengthCopied = getNextAudio(_av,
                                    buffer->mAudioDataBytesCapacity - buffer->mAudioDataByteSize,
                                    (uint8_t *)buffer->mAudioData + buffer->mAudioDataByteSize, // fixed typo: was "buffeg"
                                    &dts, &isDone);
        ct += lengthCopied;

        if (lengthCopied < 0 || isDone)
        {
            printf("nothing to read....\n\n");
            ps->finished = true;
            ps->started = false;
            break;
        }

        if (aqStartDts < 0) aqStartDts = dts;

        if (buffer->mPacketDescriptionCount == 0)
        {
            bufferStartTime.mFlags = kAudioTimeStampSampleTimeValid;
            bufferStartTime.mSampleTime = (Float64)(dts - aqStartDts); // * _av->audio.frame_size;
            if (bufferStartTime.mSampleTime < 0)
                bufferStartTime.mSampleTime = 0;
            printf("AQHandler.m fillAudioBuffer: DTS for %x: %lf time base: %lf StartDTS: %d\n",
                   (unsigned int)buffer,
                   bufferStartTime.mSampleTime,
                   _av->audio.time_base,
                   aqStartDts);
        }

        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mStartOffset = buffer->mAudioDataByteSize;
        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mDataByteSize = lengthCopied;
        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mVariableFramesInPacket = 0;
        buffer->mPacketDescriptionCount++;
        buffer->mAudioDataByteSize += lengthCopied;
    }

    int audioBufferCount, audioBufferTotal, videoBufferCount, videoBufferTotal;
    bufferCheck(_av, &videoBufferCount, &videoBufferTotal, &audioBufferCount, &audioBufferTotal);

    if (buffer->mAudioDataByteSize)
    {
        err = AudioQueueEnqueueBufferWithParameters(queue, buffer, 0, NULL, 0, 0, 0, NULL, &bufferStartTime, NULL);
        if (err)
        {
            char sErr[10];
            printf("AQHandler.m fillAudioBuffer: Could not enqueue buffer 0x%x: %d %s.", (unsigned int)buffer, (int)err, FormatError(sErr, err));
        }
    }
}

int getNextAudio(video_data_t *vInst, int maxlength, uint8_t *buf, int *pts, int *isDone)
{
    struct video_context_t *ctx = vInst->context;
    int datalength = 0;

    // spin until the audio ring buffer is unlocked and has data (or the stream dies)
    while (ctx->audio_ring.lock || (ctx->audio_ring.count <= 0 && ((ctx->play_state & STATE_DIE) != STATE_DIE)))
    {
        if (ctx->play_state & STATE_EOF) return -1;
        usleep(100);
    }

    *pts = 0;
    ctx->audio_ring.lock = kLocked;

    if (ctx->audio_ring.count > 0 && maxlength > ctx->audio_buffer[ctx->audio_ring.read].size)
    {
        memcpy(buf, ctx->audio_buffer[ctx->audio_ring.read].data, ctx->audio_buffer[ctx->audio_ring.read].size);
        *pts = ctx->audio_buffer[ctx->audio_ring.read].pts;
        datalength = ctx->audio_buffer[ctx->audio_ring.read].size;
        ctx->audio_ring.read++;
        ctx->audio_ring.read %= ABUF_SIZE;
        ctx->audio_ring.count--;
    }

    ctx->audio_ring.lock = kUnlocked;

    if ((ctx->play_state & STATE_EOF) == STATE_EOF && ctx->audio_ring.count == 0) *isDone = 1;
    return datalength;
}
-
Decoding an h264 (High) stream with OpenCV's ffmpeg on Ubuntu
9 January 2017, by arvids
I am working with a video stream (no audio) from an IP camera on Ubuntu 14.04. I am also a beginner with Ubuntu and everything on it. Everything was going great with a camera that has these parameters (from ffmpeg):
Input #0, rtsp, from 'rtsp://*private*:8900/live.sdp': 0B f=0/0
Metadata:
  title : RTSP server
  Stream #0:0: Video: h264 (Main), yuv420p(progressive), 352x192, 29.97 tbr, 90k tbn, 180k tbc
But then I changed to a newer camera, which has these parameters:
Input #0, rtsp, from 'rtsp://*private*/media/video2': 0B f=0/0
Metadata:
  title : VCP IPC Realtime stream
  Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1280x720, 25 fps, 25 tbr, 90k tbn, 50 tbc
My C++ program uses OpenCV 3 to process the stream. By default, OpenCV uses ffmpeg to decode and display the stream via the VideoCapture function.
VideoCapture vc;
vc.open(input_stream);
while ((vc >> frame), !frame.empty()) {
    // *do work*
}
With the new camera stream I get errors like these (from ffmpeg):
[h264 @ 0x7c6980] cabac decode of qscale diff failed at 41 38
[h264 @ 0x7c6980] error while decoding MB 41 38, bytestream (3572)
[h264 @ 0x7c6980] left block unavailable for requested intra mode at 0 44
[h264 @ 0x7c6980] error while decoding MB 0 44, bytestream (4933)
[h264 @ 0x7bc2c0] SEI type 25 truncated at 208
[h264 @ 0x7bfaa0] SEI type 25 truncated at 206
[h264 @ 0x7c6980] left block unavailable for requested intra mode at 0 18
[h264 @ 0x7c6980] error while decoding MB 0 18, bytestream (14717)
The image is sometimes glitched, sometimes completely frozen. After a few seconds to a few minutes, the stream freezes completely without an error. In vlc, however, it plays perfectly. I installed the newest version (3.2.2) of ffmpeg with:
./configure --enable-gpl --enable-libx264
Now, playing directly with ffplay (instead of launching from source code via OpenCV's VideoCapture), the stream plays better and doesn't freeze, but sometimes still displays warnings:
[NULL @ 0x7f834c008c00] SEI type 25 size 896 truncated at 320=1/1
[h264 @ 0x7f834c0d5d20] SEI type 25 size 896 truncated at 319=1/1
[rtsp @ 0x7f834c0008c0] max delay reached. need to consume packet
[rtsp @ 0x7f834c0008c0] RTP: missed 1 packets
[h264 @ 0x7f834c094740] concealing 675 DC, 675 AC, 675 MV errors in P frame
[NULL @ 0x7f834c008c00] SEI type 25 size 896 truncated at 320=1/1
Changing the camera hardware is not an option. The camera can be set to encode to h265 or mjpeg. When encoding to mjpeg it can output 5 fps, which is not enough. Decoding to a static video file is not an option either, because I need to display real-time results about the stream. Here is a list of API backends that can be used in the VideoCapture function. Maybe I should switch to some other decoder and player?
From my research I conclude that I have these options:
- Somehow get OpenCV to use the ffmpeg from another directory, where it is compiled with libx264.
- Somehow get OpenCV to use libVLC instead of ffmpeg. One example of switching to vlc is here, but I don't understand it well enough to say whether that is what I need. Or maybe I should be parsing the stream in code? I don't rule out that this could be some basic problem due to a lack of dependencies because, as I said, I'm a beginner with Ubuntu.
- Use vlc to preprocess the stream, as suggested here. This is probably slow, which again is bad for real-time results.
Any suggestions and comments will be appreciated.
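As a hedged illustration of the backend idea (an assumption to test, not a confirmed fix): VideoCapture accepts an explicit apiPreference argument (exposed since roughly OpenCV 3.2), so a build that includes GStreamer can be asked to bypass ffmpeg entirely. A minimal Python sketch, assuming such a build:

import cv2

URL = "rtsp://*private*/media/video2"

# Request the GStreamer backend explicitly; fall back to ffmpeg if this
# OpenCV build was not compiled with GStreamer support.
cap = cv2.VideoCapture(URL, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    cap = cv2.VideoCapture(URL, cv2.CAP_FFMPEG)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # *do work*

cap.release()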