
Media (1)
- La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (58)
- Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational; no separate configuration step is therefore required.
- Adding user-specific information and other changes to author-related behaviour
12 April 2011
The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (refer to its documentation for more information).
It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.
- List of compatible distributions
26 April 2011
The table below is the list of Linux distributions compatible with the automated installation script of MediaSPIP.
Distribution name   Version name           Version number
Debian              Squeeze                6.x.x
Debian              Wheezy                 7.x.x
Debian              Jessie                 8.x.x
Ubuntu              The Precise Pangolin   12.04 LTS
Ubuntu              The Trusty Tahr        14.04
If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)
On other sites (8696)
- How to understand the given ffplay C code snippet?
20 July 2015, by Jerikc XIONG
The following code snippet is from ffplay:
static int decoder_decode_frame(Decoder *d, AVFrame *frame, AVSubtitle *sub) {
    int got_frame = 0;

    do {
        int ret = -1;

        if (d->queue->abort_request)
            return -1;

        if (!d->packet_pending || d->queue->serial != d->pkt_serial) {
            AVPacket pkt;
            do {
                if (d->queue->nb_packets == 0)
                    SDL_CondSignal(d->empty_queue_cond);
                if (packet_queue_get(d->queue, &pkt, 1, &d->pkt_serial) < 0)
                    return -1;
                if (pkt.data == flush_pkt.data) {
                    avcodec_flush_buffers(d->avctx);
                    d->finished = 0;
                    d->next_pts = d->start_pts;
                    d->next_pts_tb = d->start_pts_tb;
                }
            } while (pkt.data == flush_pkt.data || d->queue->serial != d->pkt_serial);
            av_free_packet(&d->pkt);
            d->pkt_temp = d->pkt = pkt;
            d->packet_pending = 1;
        }

        switch (d->avctx->codec_type) {
            case AVMEDIA_TYPE_VIDEO:
                ret = avcodec_decode_video2(d->avctx, frame, &got_frame, &d->pkt_temp);
                if (got_frame) {
                    if (decoder_reorder_pts == -1) {
                        frame->pts = av_frame_get_best_effort_timestamp(frame);
                    } else if (decoder_reorder_pts) {
                        frame->pts = frame->pkt_pts;
                    } else {
                        frame->pts = frame->pkt_dts;
                    }
                }
                break;
            case AVMEDIA_TYPE_AUDIO:
                ret = avcodec_decode_audio4(d->avctx, frame, &got_frame, &d->pkt_temp);
                if (got_frame) {
                    AVRational tb = (AVRational){1, frame->sample_rate};
                    if (frame->pts != AV_NOPTS_VALUE)
                        frame->pts = av_rescale_q(frame->pts, d->avctx->time_base, tb);
                    else if (frame->pkt_pts != AV_NOPTS_VALUE)
                        frame->pts = av_rescale_q(frame->pkt_pts, av_codec_get_pkt_timebase(d->avctx), tb);
                    else if (d->next_pts != AV_NOPTS_VALUE)
                        frame->pts = av_rescale_q(d->next_pts, d->next_pts_tb, tb);
                    if (frame->pts != AV_NOPTS_VALUE) {
                        d->next_pts = frame->pts + frame->nb_samples;
                        d->next_pts_tb = tb;
                    }
                }
                break;
            case AVMEDIA_TYPE_SUBTITLE:
                ret = avcodec_decode_subtitle2(d->avctx, sub, &got_frame, &d->pkt_temp);
                break;
        }

        if (ret < 0) {
            d->packet_pending = 0;
        } else {
            d->pkt_temp.dts =
            d->pkt_temp.pts = AV_NOPTS_VALUE;
            if (d->pkt_temp.data) {
                if (d->avctx->codec_type != AVMEDIA_TYPE_AUDIO)
                    ret = d->pkt_temp.size;
                d->pkt_temp.data += ret;
                d->pkt_temp.size -= ret;
                if (d->pkt_temp.size <= 0)
                    d->packet_pending = 0;
            } else {
                if (!got_frame) {
                    d->packet_pending = 0;
                    d->finished = d->pkt_serial; // FLAG
                }
            }
        }
    } while (!got_frame && !d->finished);

    return got_frame;
}

It's difficult for me to understand the following code:
d->finished = d->pkt_serial; // FLAG
Can anyone help me?
Thanks.
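In short, d->finished records the serial of the packet generation that the decoder has completely drained: the FLAG line runs when a NULL (draining) packet yields no frame, i.e. the codec has flushed its last delayed frames. Because every seek or flush bumps the queue's serial, the flag only counts as end-of-stream while that serial is still current. The snippet below is a simplified illustration of that test, reusing the Decoder and PacketQueue names from the code above; it is not the verbatim ffplay source.

/* Simplified illustration (not verbatim ffplay): why "finished" stores a
 * serial instead of a plain boolean. Each seek/flush increments the packet
 * queue's serial, so a "finished" value recorded before the flush no longer
 * matches the current serial and is automatically treated as stale. */
static int decoder_is_drained(const Decoder *d, const PacketQueue *q)
{
    /* d->finished == d->pkt_serial was set at the FLAG line when a NULL
     * draining packet produced no frame, i.e. the codec has emitted its
     * last delayed frames for that generation of packets. It only means
     * end-of-stream while that serial is still the queue's current one. */
    return d->finished == q->serial;
}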
- Specify timestamp in ffmpeg video segment command
25 March 2019, by Soumya
I have a continuous RTSP stream coming from a camera over the network.
I want to dump the stream, but into video files of 1 minute each. I am using the following command:
ffmpeg -i "rtsp://user:pass@example.com" -f mp4 -r 12 -s 640x480 -ar 44100 \
 -ac 1 -segment_time 60 -segment_format mp4 "out%03d.mp4"
The names of the files being created are of the form out001.mp4, out002.mp4, etc. I want to include the timestamp (hour and minute) in the names of the file segments, e.g. 09-30.mp4, 09-31.mp4, etc.
If it is mandatory to provide a serial number for the segment, is it possible to get something like 09-30-001.mp4, 09-31-002.mp4?
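One possible direction, sketched under the assumption that the ffmpeg build in use has the segment muxer with its strftime option available (not confirmed by the question, so the exact flags may need adjusting), is to switch from -f mp4 to the segment muxer and let the output pattern expand wall-clock time:
ffmpeg -i "rtsp://user:pass@example.com" -r 12 -s 640x480 -ar 44100 -ac 1 \
 -f segment -segment_time 60 -segment_format mp4 -strftime 1 "%H-%M.mp4"
With strftime expansion the %03d sequence number is no longer part of the pattern, so whether a combined name such as 09-30-001.mp4 is possible would need to be checked against the segment muxer documentation of the ffmpeg version in use.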
- rtsp to rtmp using ffmpeg or any tool wrapper
27 August 2017, by Chakri
I have a requirement where I need to restream an RTSP stream from a camera source to an RTMP server. I know this may sound like a repeated question, but my exact scenario is that I cannot do it manually over the command line with an ffmpeg command. I need a wrapper where I receive the RTSP and RTMP URLs from an external source, say through a REST invocation. Then the code can trigger the ffmpeg restream.
Basically the flow is like this:
- The camera source application sends an RTSP read event (could be a basic HTTP (REST) request with the RTSP URL, metadata about the camera, serial number, etc.) to my streamer app.
Ex: /usr/bin/ffmpeg -i rtsp://10.144.11.22:554/stream1 -f flv rtmp://10.13.11.121:1935/stream1
- The streamer app computes the RTMP server URL for the corresponding camera and triggers an ffmpeg command to stream RTSP to RTMP.
- The streamer app triggers the above (2) in a separate thread and keeps reading the logs for monitoring purposes. It also identifies the end of the RTSP stream and sends an update event (for example: RTSP END) to the UI.
Now at point (2) I need a suggestion. Here I need a stable wrapper/API which can help. I tried this through some Java wrappers, but the process hangs or fails to read the output from ffmpeg. I also need to handle streams from many cameras, where spawning a thread for each one could be exhaustive.
So I am looking for a similar API/wrapper in C++ or Go which might allow closer interaction in handling the ffmpeg command.
Please point me to it if a similar issue has been addressed elsewhere.
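For point (2), one minimal pattern (sketched here in plain C with popen; the ffmpeg path, URLs and log handling are illustrative placeholders taken from the question rather than a vetted wrapper, and a real service would likely use a worker pool instead of one thread per camera) is to spawn ffmpeg with its log merged into a pipe that the parent process reads:

#include <stdio.h>

/* Minimal sketch: launch one ffmpeg restream and consume its log output.
 * A real wrapper would build the command from the REST payload, run one
 * such loop per camera, and parse the log lines instead of echoing them. */
static int restream(const char *rtsp_url, const char *rtmp_url)
{
    char cmd[1024];
    char line[4096];
    FILE *proc;

    /* "2>&1" merges ffmpeg's log (written to stderr) into the pipe. */
    snprintf(cmd, sizeof(cmd),
             "/usr/bin/ffmpeg -i '%s' -f flv '%s' 2>&1",
             rtsp_url, rtmp_url);

    proc = popen(cmd, "r");
    if (!proc)
        return -1;

    while (fgets(line, sizeof(line), proc)) {
        /* Monitoring hook: inspect progress/error lines here. */
        fputs(line, stdout);
    }

    /* EOF on the pipe (or a non-zero exit status) indicates the RTSP input
     * ended or ffmpeg failed; this is where an "RTSP END" event could be
     * reported to the UI. */
    return pclose(proc);
}

int main(void)
{
    /* Placeholder URLs taken from the question's example. */
    return restream("rtsp://10.144.11.22:554/stream1",
                    "rtmp://10.13.11.121:1935/stream1") == 0 ? 0 : 1;
}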