Media (2)
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
Other articles (68)
-
MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources, as a standalone version.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you wish to use this archive for a farm-mode installation, you will also need to make other modifications (...)
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
Making files available
14 April 2011. By default, upon initialisation, MediaSPIP does not allow visitors to download files, whether they are the originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is both possible and easy to grant visitors access to these documents, in various forms.
All of this is handled on the template configuration page. You need to go to the channel's administration area and choose in the navigation (...)
On other sites (8578)
-
Displaying Bitmaps sequentially, "like a video"
15 July 2015, by Wamasa. I'm trying to play a video that isn't supported by Android (such as a .wmv video) in my app, and I'm actually able to grab every frame and create a Bitmap from it.
So now I'm trying to show those bitmaps sequentially in a VideoView (or any other view), something like a video.
Some code:
while (true) {
    // Grab the next decoded frame from the FFmpeg-based grabber
    frame = frameGrabber.grab();
    if (frame == null)
        break;
    // Convert BGR -> RGBA so the pixel layout matches ARGB_8888
    frame2 = IplImage.create(frame.width(), frame.height(),
            opencv_core.IPL_DEPTH_8U, 4);
    opencv_imgproc.cvCvtColor(frame, frame2, opencv_imgproc.CV_BGR2RGBA);
    // Copy the converted pixels into an Android Bitmap
    bm = Bitmap.createBitmap(frame2.width(), frame2.height(),
            Bitmap.Config.ARGB_8888);
    bm.copyPixelsFromBuffer(frame2.getByteBuffer());
    canvas = new Canvas(bm);
    mVideoView.draw(canvas);
    canvas.save();
}
It looks like I can grab every frame of the video (using ffmpeg), but I just don't know how to display them.
By the way, I've already tried encoding this video to an .mp4 file and playing it in the VideoView, but it takes a long time to process the whole video (1 hour), so now I'm trying to display it right away, without encoding it to .mp4 (or any other Android-supported format).
Any advice?
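A minimal sketch of one way to handle the display step, assuming the frames end up as Bitmaps as in the code above. BitmapPlayer and decodeNextBitmap() are hypothetical names, not part of the question; a real implementation should decode on a background thread and derive the frame interval from the grabber's reported frame rate:

import android.graphics.Bitmap;
import android.os.Handler;
import android.os.Looper;
import android.widget.ImageView;

public class BitmapPlayer {
    // Assumed frame rate; take it from the grabber in real code
    private static final long FRAME_INTERVAL_MS = 1000 / 30;

    private final Handler handler = new Handler(Looper.getMainLooper());
    private final ImageView imageView;

    public BitmapPlayer(ImageView imageView) {
        this.imageView = imageView;
    }

    public void play() {
        handler.post(frameStep);
    }

    private final Runnable frameStep = new Runnable() {
        @Override
        public void run() {
            // Hypothetical helper wrapping the grab/convert loop above;
            // it returns null at end of stream.
            Bitmap next = decodeNextBitmap();
            if (next == null) return;                     // end of video
            imageView.setImageBitmap(next);               // show this frame
            handler.postDelayed(this, FRAME_INTERVAL_MS); // schedule the next one
        }
    };

    private Bitmap decodeNextBitmap() {
        return null; // placeholder for the grabber loop in the question
    }
}

The point is that a VideoView isn't required: setting successive Bitmaps on an ordinary ImageView at the source frame rate already behaves "like a video".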
-
FFmpeg always stops streaming after 3 min
23 August 2018, by Diogo Perdomo. When I start
ffmpeg -y -i rtsp://mycameraip/video.mp4 -an -codec copy file.mp4
ffmpeg always stops at the same point. What is happening?
It returns:
Input #0, rtsp, from 'rtsp://<ip>/video.mp4':
Metadata:
title : QStream
comment : QStreaming Media
Duration: N/A, start: -0.066667, bitrate: N/A
Stream #0:0: Video: mpeg4 (Simple Profile), yuv420p, 320x240 [SAR 1:1 DAR 4:3], 15 tbr, 90k tbn, 15 tbc
Stream #0:1: Audio: pcm_mulaw, 8000 Hz, mono, s16, 64 kb/s
Output #0, mp4, to 'file.mp4':
Metadata:
title : QStream
comment : QStreaming Media
encoder : Lavf57.83.100
Stream #0:0: Video: mpeg4 (Simple Profile) (mp4v / 0x7634706D), yuv420p, 320x240 [SAR 1:1 DAR 4:3], q=2-31, 15 tbr, 90k tbn, 90k tbc
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
frame= 750 fps=4.2 q=-1.0 Lsize= 345kB time=00:02:59.47 bitrate= 15.8kbits/s speed= 1x
video:336kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.839756%
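One common cause of an RTSP capture dying at a fixed offset like this is the default UDP transport timing out or dropping packets; a first thing to try is forcing RTSP over TCP with ffmpeg's -rtsp_transport option, e.g.:

ffmpeg -y -rtsp_transport tcp -i rtsp://mycameraip/video.mp4 -an -codec copy file.mp4

Note that -rtsp_transport is an input option and must appear before the -i it applies to.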
-
Demuxing a video media file with FFMPEG
23 February 2016, by MOHW. After asking this question, Extracting the h264 part of a video file (demuxing), I was actually able to figure out that:
- When I reverted to an older version of FFmpeg (avcodec-55.dll), as opposed to the avcodec-57.dll I was using earlier, the code worked perfectly without any error and the resulting h264 file played with ffplay.
- When I traced my output while using the avcodec-57.dll version of FFmpeg (the most recent version), there was actually an error,
"Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead",
after the call to avformat_write_header(ofmt_ctx_v, NULL) and another one after the call to avformat_write_header(ofmt_ctx_a, NULL). The program continued executing; the audio was fine, but the .h264 file wasn't.
The output:
Press any key to continue . . .
Press any key to continue . . .
Press any key to continue . . .
Press any key to continue . . .
==============Input Video=============
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'sample.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
title : 10153755968775490
encoder : Lavf57.19.100
Duration: 00:01:07.27, start: 0.020021, bitrate: 1058 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 400x224 [SAR 199:200 DAR 199:112], 927 kb/s, 15 fps, 15 tbr, 15360 tbn, 30 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(eng): Audio: mp3 (mp4a / 0x6134706D), 48000 Hz, stereo, s16p, 128 kb/s (default)
Metadata:
handler_name : SoundHandler
==============Output Video============
Output #0, h264, to 'sample.h264':
Stream #0:0: Video: h264 (High), yuv420p, 400x224 [SAR 199:200 DAR 199:112], q=2-31, 927 kb/s, 30 tbc
==============Output Audio============
Output #0, mp3, to 'sample.mp3':
Stream #0:0: Audio: mp3, 48000 Hz, stereo, s16p, 128 kb/s
======================================
[h264 @ 00a3ee20] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.
[mp3 @ 00a4fec0] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.
Press any key to continue . . .
The code:
#include <stdio.h>
#define __STDC_CONSTANT_MACROS
extern "C"
{
#include "libavformat/avformat.h"
}
#define USE_H264BSF 1
int main()
{
AVOutputFormat *ofmt_a = NULL,*ofmt_v = NULL;
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx_a = NULL, *ofmt_ctx_v = NULL;
AVPacket pkt;
int ret, i;
int videoindex=-1,audioindex=-1;
int frame_index=0;
const char *in_filename = "sample.mp4";
const char *out_filename_v = "sample.h264";
const char *out_filename_a = "sample.mp3";
av_register_all();
//Input
if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
printf( "Could not open input file.");
goto end;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
printf( "Failed to retrieve input stream information");
goto end;
}
system("pause");
//Output
avformat_alloc_output_context2(&ofmt_ctx_v, NULL, NULL, out_filename_v);
if (!ofmt_ctx_v) {
printf( "Could not create output context\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ofmt_v = ofmt_ctx_v->oformat;
system("pause");
avformat_alloc_output_context2(&ofmt_ctx_a, NULL, NULL, out_filename_a);
if (!ofmt_ctx_a) {
printf( "Could not create output context\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ofmt_a = ofmt_ctx_a->oformat;
system("pause");
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
//Create output AVStream according to input AVStream
AVFormatContext *ofmt_ctx;
AVStream *in_stream = ifmt_ctx->streams[i];
AVStream *out_stream = NULL;
if(ifmt_ctx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO){
videoindex=i;
out_stream=avformat_new_stream(ofmt_ctx_v, in_stream->codec->codec);
ofmt_ctx=ofmt_ctx_v;
}else if(ifmt_ctx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO){
audioindex=i;
out_stream=avformat_new_stream(ofmt_ctx_a, in_stream->codec->codec);
ofmt_ctx=ofmt_ctx_a;
}else{
break;
}
if (!out_stream) {
printf( "Failed allocating output stream\n");
ret = AVERROR_UNKNOWN;
goto end;
}
//Copy the settings of AVCodecContext
if (avcodec_copy_context(out_stream->codec, in_stream->codec) < 0) {
printf( "Failed to copy context from input to output stream codec context\n");
goto end;
}
out_stream->codec->codec_tag = 0;
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
system("pause");
//Dump Format------------------
printf("\n==============Input Video=============\n");
av_dump_format(ifmt_ctx, 0, in_filename, 0);
printf("\n==============Output Video============\n");
av_dump_format(ofmt_ctx_v, 0, out_filename_v, 1);
printf("\n==============Output Audio============\n");
av_dump_format(ofmt_ctx_a, 0, out_filename_a, 1);
printf("\n======================================\n");
//Open output file
if (!(ofmt_v->flags & AVFMT_NOFILE)) {
if (avio_open(&ofmt_ctx_v->pb, out_filename_v, AVIO_FLAG_WRITE) < 0) {
printf( "Could not open output file '%s'", out_filename_v);
goto end;
}
}
if (!(ofmt_a->flags & AVFMT_NOFILE)) {
if (avio_open(&ofmt_ctx_a->pb, out_filename_a, AVIO_FLAG_WRITE) < 0) {
printf( "Could not open output file '%s'", out_filename_a);
goto end;
}
}
//Write file header
if (avformat_write_header(ofmt_ctx_v, NULL) < 0) {
printf( "Error occurred when opening video output file\n");
goto end;
}
if (avformat_write_header(ofmt_ctx_a, NULL) < 0) {
printf( "Error occurred when opening audio output file\n");
goto end;
}
system("pause");
#if USE_H264BSF
AVBitStreamFilterContext* h264bsfc = av_bitstream_filter_init("h264_mp4toannexb");
#endif
while (1) {
AVFormatContext *ofmt_ctx;
AVStream *in_stream, *out_stream;
//Get an AVPacket
if (av_read_frame(ifmt_ctx, &pkt) < 0)
break;
in_stream = ifmt_ctx->streams[pkt.stream_index];
if(pkt.stream_index==videoindex){
out_stream = ofmt_ctx_v->streams[0];
ofmt_ctx=ofmt_ctx_v;
#if USE_H264BSF
av_bitstream_filter_filter(h264bsfc, in_stream->codec, NULL, &pkt.data, &pkt.size, pkt.data, pkt.size, 0);
#endif
printf("Write Video Packet. size:%d\tpts:%lld\n",pkt.size,pkt.pts);
}else if(pkt.stream_index==audioindex){
out_stream = ofmt_ctx_a->streams[0];
ofmt_ctx=ofmt_ctx_a;
printf("Write Audio Packet. size:%d\tpts:%lld\n",pkt.size,pkt.pts);
}else{
continue;
}
//Convert PTS/DTS
pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
pkt.pos = -1;
pkt.stream_index=0;
//Write
if (av_interleaved_write_frame(ofmt_ctx, &pkt) < 0) {
printf( "Error muxing packet\n");
break;
}
//printf("Write %8d frames to output file\n", frame_index);
av_free_packet(&pkt);
frame_index++;
}
system("pause");
#if USE_H264BSF
av_bitstream_filter_close(h264bsfc);
#endif
//Write file trailer
av_write_trailer(ofmt_ctx_a);
av_write_trailer(ofmt_ctx_v);
end:
avformat_close_input(&ifmt_ctx);
/* close output */
if (ofmt_ctx_a && !(ofmt_a->flags & AVFMT_NOFILE))
avio_close(ofmt_ctx_a->pb);
if (ofmt_ctx_v && !(ofmt_v->flags & AVFMT_NOFILE))
avio_close(ofmt_ctx_v->pb);
avformat_free_context(ofmt_ctx_a);
avformat_free_context(ofmt_ctx_v);
system("pause");
if (ret < 0 && ret != AVERROR_EOF) {
printf( "Error occurred.\n");
return -1;
}
return 0;
}
EDIT 1
After reading through http://lists.libav.org/pipermail/libav-devel/2014-June/060048.html, I figured out what was causing the "Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead" warning. I fixed it by adding out_stream->time_base = in_stream->time_base; before the call to avformat_write_header. The code now runs without any errors! The h264 file created with the old FFmpeg (avcodec-55.dll) works fine, while the one created by the recent avcodec-57.dll is still invalid.
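For reference, a minimal sketch of where that one-line fix lands in the listing above. Placing it inside the per-stream setup loop is an assumption; any point where both streams are in scope before the avformat_write_header calls should work:

//Copy the settings of AVCodecContext
if (avcodec_copy_context(out_stream->codec, in_stream->codec) < 0) {
    printf( "Failed to copy context from input to output stream codec context\n");
    goto end;
}
//Fix from EDIT 1: hand the muxer the real stream time base, instead of
//letting it fall back to the deprecated AVStream.codec.time_base hint
out_stream->time_base = in_stream->time_base;
out_stream->codec->codec_tag = 0;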