
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (60)
-
List of compatible distributions
26 April 2011 — The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

Distribution name    Version name            Version number
Debian               Squeeze                 6.x.x
Debian               Wheezy                  7.x.x
Debian               Jessie                  8.x.x
Ubuntu               The Precise Pangolin    12.04 LTS
Ubuntu               The Trusty Tahr         14.04
If you want to help us improve this list, you can give us access to a machine running a distribution not mentioned above, or send us the fixes needed to add it (...)
-
MediaSPIP Core: Configuration
9 November 2010 — By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the skeleton (template set); a page for configuring the site's home page; a page for configuring the sectors;
It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and specific features (...)
-
Managing object creation and editing rights
8 February 2011 — By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;
On other sites (9777)
-
ffmpeg using VDA on OS X
13 May 2014, by user2618420 — I am trying to enable hardware decoding in a project that previously used ffmpeg software decoding.
ffmpeg has support for hardware (VDA) decoding.
Here is the documentation I followed: https://github.com/dilaroga/ffmpeg-vda/wiki/FFmpeg-vda-usage
My code:

enum AVPixelFormat myGetFormatCallback(struct AVCodecContext *ctx, const enum AVPixelFormat *fmt)
{
struct vda_context *vda_ctx;
vda_ctx = (struct vda_context *)malloc(sizeof(vda_context));
vda_ctx->decoder = NULL;
vda_ctx->width = ctx->width;
vda_ctx->height = ctx->height;
vda_ctx->format = 'avc1';
vda_ctx->use_ref_buffer = 1;
switch (ctx->pix_fmt) {
case PIX_FMT_UYVY422:{
vda_ctx->cv_pix_fmt_type = '2vuy';
break;
}
case PIX_FMT_YUYV422:{
vda_ctx->cv_pix_fmt_type = 'yuvs';
break;
}
case PIX_FMT_NV12:{
vda_ctx->cv_pix_fmt_type = '420v';
break;
}
case PIX_FMT_YUV420P:{
vda_ctx->cv_pix_fmt_type = 'y420';
break;
}
default:{
av_log(ctx, AV_LOG_ERROR, "Unsupported pixel format: %d\n", ctx->pix_fmt);
Logger::debug(LOG_LEVEL_ERROR, "Unsupported pixel format: %d", ctx->pix_fmt);
throw AbstractException("Unsupported pixel format");
}
}
int status = ff_vda_create_decoder(vda_ctx, (unsigned char *)ctx->extradata, ctx->extradata_size);
if (status){
Logger::debug(LOG_LEVEL_ERROR, "Error create VDA decoder");
throw AbstractException("Error create VDA decoder");
}else{
ctx->hwaccel_context = vda_ctx;
}
return ctx->pix_fmt;
}
static void release_vda_context(void *opaque, uint8_t *data)
{
vda_buffer_context *vda_context = (vda_buffer_context *)opaque;
av_free(vda_context);
}
int myGetBufferCallback(struct AVCodecContext *s, AVFrame *av_frame, int flags)
{
vda_buffer_context *vda_context = (vda_buffer_context *)av_mallocz(sizeof(*vda_context));
AVBufferRef *buffer = av_buffer_create(NULL, 0, release_vda_context, vda_context, 0);
if( !vda_context || !buffer )
{
av_free(vda_context);
return -1;
}
av_frame->buf[0] = buffer;
av_frame->data[0] = (uint8_t*)1;
return 0;
}
static void release_buffer(struct AVCodecContext *opaque, AVFrame *pic)
{
vda_buffer_context *context = (vda_buffer_context*)opaque;
CVPixelBufferUnlockBaseAddress(context->cv_buffer, 0);
CVPixelBufferRelease(context->cv_buffer);
av_free(context);
}
int main() {
//init ffmpeg context
if (avcodec_open2(mCodecContext, mCodec, NULL) < 0) throw AbstractException("Unable to open codec");
// note: the callbacks below are assigned only after avcodec_open2()
mCodecContext->get_format = myGetFormatCallback;
mCodecContext->get_buffer2 = myGetBufferCallback;
mCodecContext->release_buffer = release_buffer; // legacy API, paired with get_buffer rather than get_buffer2
}

But myGetFormatCallback is never called, and the program crashes when myGetBufferCallback is invoked.
Why is myGetFormatCallback never called? What is wrong? Might this approach not work at all?
-
FPS drop in FFMPEG streaming processes to FB from production server
30 January 2017, by Aakash Gupta — I have made a Rails app that can stream live video to Facebook's RTMP server, deployed on AWS with nginx as the web server. The major problem I see after viewing the log files of the FFmpeg processes is that the FPS sometimes starts to drop. In some cases it stays stable at 25 FPS, but in others it holds 25 only for a while and then starts to drop, sometimes falling to 3-4 FPS, which is unacceptable during live streaming. As the FFmpeg process is quite heavy, I would also like to share my CPU info.
CPU information is :
cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
stepping : 2
microcode : 0x25
cpu MHz : 2400.070
cache size : 30720 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm xsaveopt fsgsbase bmi1 avx2 smep bmi2 erms invpcid
bogomips : 4800.14
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

FFmpeg log file with unstable FPS: https://drive.google.com/open?id=0B1gtp1iXJppkUndFamk4M0lRYzA
FFmpeg log file with stable FPS: https://drive.google.com/open?id=0B1gtp1iXJppkMkVCZEJjYWJrVTA
While the FPS was stable, I also tried running a second FFmpeg process in parallel from the same server, which caused the FPS of both processes to drop to 13-14.
I am currently using this FFmpeg command:
ffmpeg -loop 1 -re -y -f image2 -i "image_path" -i "audio_path.aac" -acodec copy -bsf:a aac_adtstoasc -pix_fmt yuv420p -profile:v high -s 1280x720 -vb 400k -maxrate 400k -minrate 400k -bufsize 600k -deinterlace -vcodec libx264 -preset veryfast -g 30 -r 30 -t 14400 -strict -2 -f flv "rtmp_server_link"
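As a diagnostic aid, here is a minimal Python sketch (my own suggestion, not part of the question's setup): ffmpeg's -progress option writes machine-readable key=value lines, including the current fps, so a small wrapper can log exactly when the rate starts to fall. It reuses the flags and placeholder paths from the command above.

import subprocess

# Run the streaming command with -progress pipe:1 so that ffmpeg reports
# key=value progress lines (fps=..., bitrate=..., etc.) on stdout.
cmd = [
    "ffmpeg", "-loop", "1", "-re", "-y", "-f", "image2", "-i", "image_path",
    "-i", "audio_path.aac", "-acodec", "copy", "-bsf:a", "aac_adtstoasc",
    "-pix_fmt", "yuv420p", "-profile:v", "high", "-s", "1280x720",
    "-vb", "400k", "-maxrate", "400k", "-minrate", "400k", "-bufsize", "600k",
    "-deinterlace", "-vcodec", "libx264", "-preset", "veryfast",
    "-g", "30", "-r", "30", "-t", "14400", "-strict", "-2",
    "-progress", "pipe:1", "-f", "flv", "rtmp_server_link",
]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    key, _, value = line.strip().partition("=")
    if key == "fps":
        print("current fps:", value)  # log this; alert when it drops below ~25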
I never face this problem when streaming to Facebook from the app on my localhost.
So, my questions are:
- What can be the reason for this FPS drop?
- Can scaling up the production server help fix this issue?
- Can I run multiple FFmpeg streaming processes from the same server without a performance drop?
Thanks in advance :)
-
PyAV: force a new framerate while remuxing a stream?
7 June 2019, by ToxicFrog — I have a Python program that receives a sequence of H264 video frames over the network, which I want to display and, optionally, record. The camera records at 30 FPS and sends frames as fast as it can, which isn't consistently 30 FPS due to changing network conditions; sometimes it falls behind and then catches up, and, rarely, it drops frames entirely.
The "display" part is easy ; I don’t need to care about timing or stream metadata, just display the frames as fast as they arrive :
input = av.open(get_video_stream())
for packet in input.demux(video=0):
for frame in packet.decode():
# A bunch of numpy and pygame code here to convert the frame to RGB
# row-major and blit it to the screen
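For reference, the elided conversion might look something like this hypothetical sketch (my assumption, not the question's actual code), where screen is an already-initialized pygame display surface:

import pygame  # assumes pygame.init() and a display surface `screen` already exist

rgb = frame.to_ndarray(format='rgb24')                       # H x W x 3, row-major
surface = pygame.surfarray.make_surface(rgb.swapaxes(0, 1))  # surfarray expects W x H x 3
screen.blit(surface, (0, 0))
pygame.display.flip()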
The "record" part looks like it should be easy:

input = av.open(get_video_stream())
output = av.open(filename, 'w')
output.add_stream(template=input.streams[0])
for packet in input.demux(video=0):
for frame in packet.decode():
# ...display code...
packet.stream = output.streams[0]
output.mux_one(packet)
output.close()

And indeed this produces a valid MP4 file containing all the frames, and if I play it back with
mplayer -fps 30
it works fine. But that -fps 30
is absolutely required:

$ ffprobe output.mp4
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 960x720,
1277664 kb/s, 12800 fps, 12800 tbr, 12800 tbn, 25600 tbc (default)

Note the 12,800 frames per second. It should look something like this (produced by calling
mencoder -fps 30
and piping the frames into it):

$ ffprobe mencoder_test.mp4
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 960x720,
2998 kb/s, 30 fps, 30 tbr, 90k tbn, 180k tbc (default)

Inspecting the packets and frames I get from the input stream, I see:
stream: time_base=1/1200000
codec: framerate=25 time_base=1/50
packet: dts=None pts=None duration=48000 time_base=1/1200000
frame: dts=None pts=None time=None time_base=1/1200000

So, the packets and frames don't have timestamps at all; they have a time_base which matches neither the timebase that ends up in the final file nor the actual framerate of the camera; and the codec has a framerate and timebase that match neither the final file, nor the camera framerate, nor the other video stream metadata!
The PyAV documentation is all but entirely absent when it comes to timing and framerate, but I have tried manually setting various combinations of stream, packet, and frame time_base, dts, and pts, with no success. I can always remux the recorded videos again to get the correct framerate, but I'd rather write video files that are correct in the first place.

So, how do I get PyAV to remux the video in a way that produces an output correctly marked as 30 FPS?
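For illustration, here is a minimal sketch of the kind of manual timestamping attempted above, assuming a constant 30 FPS, no B-frame reordering (so pts may equal dts), and that the muxer honors an explicitly assigned time_base; get_video_stream() and filename are the placeholders from the question, and this is a sketch of the attempt, not a confirmed fix:

import av
from fractions import Fraction

FPS = 30

input_ = av.open(get_video_stream())            # placeholder from the question
output = av.open(filename, 'w')                 # placeholder from the question
stream = output.add_stream(template=input_.streams[0])
stream.time_base = Fraction(1, FPS)             # assumption: one tick per frame

for i, packet in enumerate(input_.demux(video=0)):
    if packet.dts is None:
        continue                                # skip the end-of-stream flush packet
    packet.stream = stream
    packet.pts = i                              # stamp frame i at i/30 of a second
    packet.dts = i                              # assumes decode order == display order
    output.mux_one(packet)

output.close()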