
Other articles (56)
-
Support for all media types
10 April 2011
Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (Open Office, Microsoft Office (spreadsheets, presentations), web (HTML, CSS), LaTeX, Google Earth) (...)
-
Support for HTML5 audio and video
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
-
MediaSPIP Core: Configuration
9 November 2010
By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the templates; a page for the configuration of the site's home page; a page for the configuration of the sections.
It also provides an additional page that only appears when certain plugins are activated, for controlling their display and their specific features (...)
On other sites (10115)
-
Pass individual frames as BGRA byte array and set the timestamps via pipe to FFmpeg
30 July 2023, by Nicke Manarin
I have a set of images (as BGRA byte[]) with their respective timestamps in milliseconds, and I want to pass them to FFmpeg to build an animation.

I'm using FFmpeg v6 right now and in this example I'm expecting a GIF as output, but I'm going to export to multiple formats later.


var arguments = "-vsync passthrough " +
 "-f rawvideo " +
 "-pix_fmt bgra " +
 "-video_size {width}x{height} " +
 "-i - " +
 "-loop 0 " +
 "-lavfi palettegen=stats_mode=diff[pal],[0:v][pal]paletteuse=new=1:dither=sierra2_4a:diff_mode=rectangle " +
 "-f gif " +
 "-y \"C:\\Users\\User\\Desktop\\test.gif\"";

_process = new Process
{
 StartInfo = new ProcessStartInfo
 {
 FileName = "./ffmpeg.exe",
 Arguments = arguments.Replace("{width}", width.ToString()).Replace("{height}", height.ToString()),
 RedirectStandardInput = true,
 RedirectStandardOutput = true,
 UseShellExecute = false,
 CreateNoWindow = true
 }
};

_process.Start();




Then, in my render loop, I try to send the frames and their timestamps one by one.


public void EncodeFrame(IntPtr bufferAddress, int bufferStride, int width, int height, int index, long timestamp, int delay)
{
 // Copy the BGRA pixels out of unmanaged memory.
 var frameSize = height * bufferStride;
 var frameBytes = new byte[frameSize];
 System.Runtime.InteropServices.Marshal.Copy(bufferAddress, frameBytes, 0, frameSize);

 // Write the frame, then a delimiter and its timestamp, to FFmpeg's stdin.
 _process.StandardInput.BaseStream.Write(frameBytes, 0, frameSize);
 _process.StandardInput.BaseStream.Write(_delimiter, 0, _delimiter.Length);
 _process.StandardInput.BaseStream.Write(BitConverter.GetBytes(timestamp), 0, sizeof(long));
}



The issue is that I'm getting an IOException ("The pipe has been ended"), so I'm probably not sending the frames correctly (omitting the delimiter and timestamp doesn't help either).


Is this even possible?
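
For reference, a minimal C sketch of the byte stream the rawvideo demuxer expects on stdin: bare pixel bytes, frame after frame, with no delimiters or inline timestamps (timestamps come from -framerate). The 320x240 size, the two-frame test pattern and the out.gif path are illustrative assumptions, and popen() is POSIX (use _popen() on Windows).

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const int width = 320, height = 240, frames = 2;
    const size_t frame_size = (size_t)width * height * 4; // BGRA = 4 bytes/px
    uint8_t *frame = malloc(frame_size);
    FILE *ff = popen(
        "ffmpeg -f rawvideo -pix_fmt bgra -video_size 320x240 "
        "-framerate 30 -i - -y out.gif", "w");
    if (!frame || !ff)
        return 1;
    for (int i = 0; i < frames; ++i) {
        memset(frame, i ? 0xFF : 0x00, frame_size); // black frame, then white
        fwrite(frame, 1, frame_size, ff);           // raw pixels only, nothing else
    }
    free(frame);
    return pclose(ff); // closing stdin lets ffmpeg finish the file
}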


-
Setting individual pixels of an RGB frame for ffmpeg encoding
15 May 2013, by Camille Goudeseune
I'm trying to change the test pattern of an ffmpeg streamer, Trouble syncing libavformat/ffmpeg with x264 and RTP, into a familiar RGB format. My broader goal is to compute frames of a streamed video on the fly.
So I replaced its AV_PIX_FMT_MONOWHITE with AV_PIX_FMT_RGB24, which is "packed RGB 8:8:8, 24bpp, RGBRGB..." according to http://libav.org/doxygen/master/pixfmt_8h.html.
To stuff its pixel array called data, I've tried many variations on

for (int y=0; y<HEIGHT; ++y) {
 for (int x=0; x<WIDTH; ++x) {
 uint8_t* rgb = data + 3 * (y*WIDTH + x);
 const double i = x/double(WIDTH);
 // const double j = y/double(HEIGHT);
 rgb[0] = 255*i;
 rgb[1] = 0;
 rgb[2] = 255*(1-i);
 }
}
At HEIGHT x WIDTH = 80x60, this version yields a striped four-column pattern (screenshot omitted), when I expect a single blue-to-red horizontal gradient.
640x480 yields the same 4-column pattern, but with far more horizontal stripes.
640x640, 160x160, etc., yield three columns, cyan-ish / magenta-ish / yellow-ish, with the same kind of horizontal stripiness.
Vertical gradients behave even more weirdly.
Appearance was unaffected by an AV_PIX_FMT_RGBA attempt (4 not 3 bytes per pixel, alpha=255). Also unaffected by a port from C to C++.
The argument srcStrides passed to sws_scale() is a length-1 array, containing the single int HEIGHT.
Access each Pixel of AVFrame asks the same question in less detail, so far unanswered.
The streamer emits one warning, which I doubt affects appearance:
[rtp @ 0x269c0a0] Encoder did not produce proper pts, making some up.
So: how do you set the RGB value of a pixel in a frame to be sent to sws_scale() (and then to x264_encoder_encode() and av_interleaved_write_frame())?
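
A minimal sketch, not the streamer's actual code, of one way to fill an RGB24 frame using libavutil's own allocation (the helper name make_gradient_frame is mine). The detail that matters is stepping rows by frame->linesize[0], the stride in bytes that sws_scale() expects, rather than by 3*WIDTH:

#include <stdint.h>
#include <libavutil/frame.h>
#include <libavutil/pixfmt.h>

AVFrame *make_gradient_frame(int width, int height)
{
    AVFrame *frame = av_frame_alloc();
    if (!frame)
        return NULL;
    frame->format = AV_PIX_FMT_RGB24;
    frame->width  = width;
    frame->height = height;
    if (av_frame_get_buffer(frame, 0) < 0) { // allocates data[] and linesize[]
        av_frame_free(&frame);
        return NULL;
    }
    for (int y = 0; y < height; ++y) {
        uint8_t *row = frame->data[0] + y * frame->linesize[0]; // stride, not 3*width
        for (int x = 0; x < width; ++x) {
            uint8_t *rgb = row + 3 * x;
            const double i = x / (double)width;
            rgb[0] = (uint8_t)(255 * i);       // red ramps up left to right
            rgb[1] = 0;
            rgb[2] = (uint8_t)(255 * (1 - i)); // blue ramps down
        }
    }
    return frame;
}

frame->data and frame->linesize are then what sws_scale() takes as srcSlice and srcStride; a stride is bytes per row, so a length-1 srcStrides holding HEIGHT would explain stripes that change with the frame size.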
-
How do I toggle individual codec options in libavcodec (specifically h264_options)
2 July 2020, by John Allard
I'm trying to figure out how to enable the enable_er option as defined in h264dec.c in libavcodec. It is defined as an AVOption in the AVCodec.priv_class.option field. I can't figure out whether this is some sort of compile-time option, or an option that I can enable via the av_dict_set method when initializing an AVCodec via avcodec_open2.

I'm talking about these options in h264dec.c:

#define OFFSET(x) offsetof(H264Context, x)
#define VD AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_DECODING_PARAM
static const AVOption h264_options[] = {
 { "is_avc", "is avc", OFFSET(is_avc), AV_OPT_TYPE_BOOL, {.i64 = 0}, 0, 1, 0 },
 { "nal_length_size", "nal_length_size", OFFSET(nal_length_size), AV_OPT_TYPE_INT, {.i64 = 0}, 0, 4, 0 },
 { "enable_er", "Enable error resilience on damaged frames (unsafe)", OFFSET(enable_er), AV_OPT_TYPE_BOOL, { .i64 = -1 }, -1, 1, VD },
 { NULL },
};

static const AVClass h264_class = {
 .class_name = "H264 Decoder",
 .item_name = av_default_item_name,
 .option = h264_options,
 .version = LIBAVUTIL_VERSION_INT,
};

AVCodec ff_h264_decoder = {
 .name = "h264",
 .long_name = NULL_IF_CONFIG_SMALL("H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10"),
 .type = AVMEDIA_TYPE_VIDEO,
 .id = AV_CODEC_ID_H264,
 .priv_data_size = sizeof(H264Context),
 .init = h264_decode_init,
 .close = h264_decode_end,
 .decode = h264_decode_frame,
 .capabilities = /*AV_CODEC_CAP_DRAW_HORIZ_BAND |*/ AV_CODEC_CAP_DR1 |
 AV_CODEC_CAP_DELAY | AV_CODEC_CAP_SLICE_THREADS |
 AV_CODEC_CAP_FRAME_THREADS,
 .caps_internal = FF_CODEC_CAP_INIT_THREADSAFE | FF_CODEC_CAP_EXPORTS_CROPPING,
 .flush = flush_dpb,
 .init_thread_copy = ONLY_IF_THREADS_ENABLED(decode_init_thread_copy),
 .update_thread_context = ONLY_IF_THREADS_ENABLED(ff_h264_update_thread_context),
 .profiles = NULL_IF_CONFIG_SMALL(ff_h264_profiles),
 .priv_class = &h264_class,
};
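
Since enable_er is declared with the decoding-param flag (VD) on the decoder's priv_class, it is a runtime option, not a compile-time one. Below is a sketch of setting it through the options dictionary consumed by avcodec_open2(); these are standard libavcodec/libavutil calls, but the wrapper function is illustrative only:

#include <libavcodec/avcodec.h>
#include <libavutil/dict.h>

int open_h264_with_er(AVCodecContext **out_ctx)
{
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVCodecContext *ctx = codec ? avcodec_alloc_context3(codec) : NULL;
    AVDictionary *opts = NULL;
    if (!ctx)
        return AVERROR(ENOMEM);

    // Key matches the AVOption name in h264_options[] above.
    av_dict_set(&opts, "enable_er", "1", 0);

    int ret = avcodec_open2(ctx, codec, &opts); // consumes recognized entries
    av_dict_free(&opts); // anything left in opts was not recognized
    if (ret < 0) {
        avcodec_free_context(&ctx);
        return ret;
    }
    *out_ctx = ctx;
    return 0;
}

An equivalent route after allocation is av_opt_set(ctx->priv_data, "enable_er", "1", 0), since the option lives on the decoder's private context.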