
Other articles (40)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)
-
Other interesting software
13 April 2011
We don't claim to be the only ones doing what we do... and we certainly don't claim to be the best either... What we do, we simply try to do well, and to keep getting better...
The following list contains software that is more or less similar to MediaSPIP, or whose goals MediaSPIP more or less shares.
We don't know them and we haven't tried them, but you can take a peek.
Videopress
Website: http://videopress.com/
License: GNU/GPL v2
Source code: (...)
-
Customizing by adding a logo, a banner or a background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
On other sites (5584)
-
The First Problem
19 January 2011, by Multimedia Mike — HTML5
A few years ago, The Linux Hater made the following poignant observation regarding Linux driver support:
Drivers are only just the beginning... But for some reason y’all like to focus on the drivers. You know why lusers do that? Because it just happens to be the problem that people notice first.
And so it is with the HTML5 video codec debate, re-invigorated in the past week by Google’s announcement of dropping native H.264 support in their own HTML5 video tag implementation. As I read up on the fiery debate, I kept wondering why people are so obsessed with this issue. Then I remembered the Linux Hater’s post and realized that the video codec issue is simply the first problem that most people notice regarding HTML5 video.
I appreciate that the video codec debate has prompted Niedermayer to post on his blog once more. Otherwise, I’m just munching popcorn on the sidelines, amused and mildly relieved that the various factions are vociferously attacking each other rather than that little project I help with at work.
Getting back to the "first problem" aspect— there’s so much emphasis on the video codec; I wonder why no one ever, ever mentions word one about an audio codec. AAC is typically the codec that pairs with H.264 in the MPEG stack. Dark Shikari once mentioned that "AAC’s licensing terms are exponentially more onerous than H.264’s. If Google didn’t want to use H.264, they would sure as hell not want to use AAC." Most people are probably using "H.264" to refer to the entire MPEG/H.264/AAC stack, even if they probably don’t understand what all of those pieces mean.
Anyway, The Linux Hater’s driver piece continues :
Once y’all have drivers, the fight will move to the next layer up. And like I said, it’s a lot harder at that layer.
A few months ago, when I wanted to post the WebM output of my new VP8 encoder and thought it would be a nice touch to deliver it via a video tag, I ignored the video codec problem (just encoded a VP8/WebM file) only to immediately discover a problem at a different layer— specifically, embedding a file using a video tag triggers a full file download when the page is loaded, which is unacceptable from end user and web hosting perspectives. This is a known issue but doesn’t get as much attention, I guess because there are bigger problems to solve first (cf. the video codec issue).
For other issues, check out the YouTube blog’s HTML5 post or Hulu’s post that also commented on HTML5. Issues such as video streaming flexibility, content protection, fullscreen video, webcam/microphone input, and numerous others are rarely mentioned in the debates. Only "video codec" is of paramount importance.
But I’m lending too much weight to the cacophony of a largely uninformed internet debate. Realistically, I know there are many talented engineers down in the trenches working to solve at least some of these problems. To tie this in with the Linux driver example, I’m consistently stunned these days regarding how simple it is to get Linux working on a new computer— most commodity consumer hardware really does just work right out of the box. Maybe one day, we’ll wake up and find that HTML5 video has advanced to the point that it solves all of the relevant problems to make it the simple and obvious choice for delivering web video in nearly all situations.
It won’t be this year.
-
FFmpeg can't recognize 3 channels with 32 bits each
4 April 2022, by Chryfi
I am writing the linearized depth buffer of a game to OpenEXR using FFmpeg. Unfortunately, FFmpeg does not fully adhere to the OpenEXR file specification (such as allowing an unsigned integer for one channel), so I am writing one float channel to OpenEXR, which is put into the green channel with this command:
-f rawvideo -pix_fmt grayf32be -s %WIDTH%x%HEIGHT% -r %FPS% -i - -vf %DEFVF% -preset ultrafast -tune zerolatency -qp 6 -compression zip1 %NAME%_depth_%d.exr

The float range is from 0F to 1F and it is linear. I can confirm that the calculation and linearization are correct by testing a 16-bit integer (per pixel component) PNG in the Blender compositor. The 16-bit integer data is written like this:
short s = (short) (linearizeDepth(depth) * (Math.pow(2, 16) - 1));
whereas for float the linearized value is written directly to OpenEXR without multiplying by anything.

However, when viewing the OpenEXR file it doesn't have the same "gradient" as the 16-bit PNG... when viewing them side by side, it appears as if the values near 0 are not linear, and they are not as dark as they should be, unlike in the 16-bit PNG.
(And yes, I set the image node to linear.) Comparing it with 3D tracking data from the game, I can't reproduce the depth and can't mask things using the depth buffer, whereas with the PNG I can.


How is it possible for a linear float range to turn out so different from a linear integer range in an image?


UPDATE:


I now write 3 channels to ffmpeg with this code:


float f2 = this.linearizeDepth(depth); // linearized depth in [0, 1]

buffer.putFloat(f2); // first value for this pixel
buffer.putFloat(0); // second value
buffer.putFloat(0); // third value



The byte buffer has the size width * height * 3 * 4, i.e. 3 channels of 4 bytes each. The command is now
-f rawvideo -pix_fmt gbrpf32be -s %WIDTH%x%HEIGHT% -r %FPS% -i - -vf %DEFVF% -preset ultrafast -tune zerolatency -qp 6 -compression zip1 -pix_fmt gbrpf32le %NAME%_depth_%d.exr
which should mean that the input (byte buffer) is expected to contain 3 channels of 32-bit floats.

FFmpeg is somehow splitting up the channels... could this be a bug, or is it my fault?
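One thing worth noting here: gbrpf32be is a planar pixel format (the "p" in gbrp), so the rawvideo demuxer expects each frame as three complete planes written back to back (all G samples, then all B, then all R), not as interleaved G,B,R triplets per pixel. With interleaved data, the first third of the frame is read as the entire G plane, which looks exactly like channels being split up. Below is a minimal C sketch of the planar layout the demuxer expects; the writer function is hypothetical and assumes a little-endian host.

#include <stdio.h>

/* Hypothetical sketch: write one frame of linearized depth in the
   planar layout that "-f rawvideo -pix_fmt gbrpf32be" expects:
   the whole G plane first, then the B plane, then the R plane. */
static void write_frame_gbrpf32be(FILE *out, const float *depth,
                                  int width, int height)
{
    const int n = width * height;

    /* G plane: each float byte-swapped to big-endian
       (assumes a little-endian host) */
    for (int i = 0; i < n; i++) {
        const unsigned char *p = (const unsigned char *)&depth[i];
        const unsigned char be[4] = { p[3], p[2], p[1], p[0] };
        fwrite(be, 1, sizeof be, out);
    }

    /* B plane, then R plane: all zeros */
    const unsigned char zero[4] = { 0, 0, 0, 0 };
    for (int i = 0; i < 2 * n; i++)
        fwrite(zero, 1, sizeof zero, out);
}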


-
ffmpeg libavfilter acrossfade: I get just the first input in the output
29 July 2024, by Alex
I'm trying to combine two raw audio chunks (s16, 48000, mono) through acrossfade.


I create the filter nodes (skipping all error checking here):


avfilter_graph_create_filter(&mediaIn_1,
 avfilter_get_by_name("abuffer"),
 "MediaIn_1",
 "sample_rate=48000:sample_fmt=s16:channel_layout=mono",
 NULL, graph);
avfilter_graph_create_filter(&mediaIn_2,
 avfilter_get_by_name("abuffer"),
 "MediaIn_2",
 "sample_rate=48000:sample_fmt=s16:channel_layout=mono",
 NULL, graph);
avfilter_graph_create_filter(&mediaOut,
 avfilter_get_by_name("abuffersink"),
 "MediaOut",
 NULL, NULL, graph);
avfilter_graph_create_filter(&crossfade,
 avfilter_get_by_name("acrossfade"),
 "crossfade", "nb_samples=6000:c1=tri:c2=tri", NULL, graph);



Then I link them in a graph:


avfilter_link(mediaIn_1, 0, crossfade, 0);
avfilter_link(mediaIn_2, 0, crossfade, 1);
avfilter_link(crossfade, 0, mediaOut, 0);
avfilter_graph_config(graph, NULL);



After that I create a frame with all the chunk data:


frame1 = av_frame_alloc();
frame1->format = AV_SAMPLE_FMT_S16;
frame1->nb_samples = buf1sz / 2;
frame1->sample_rate = 48000;
frame1->ch_layout.order = AV_CHANNEL_ORDER_NATIVE;
frame1->ch_layout.nb_channels = 1;
frame1->ch_layout.u.mask = AV_CH_LAYOUT_MONO;
frame1->ch_layout.opaque = NULL;
frame1->pts = 0;
frame1->duration = buf1sz / 2;
frame1->time_base.num = 1;
frame1->time_base.den = 48000;

av_frame_get_buffer(frame1, 0);
memcpy(frame1->buf[0]->data, buf1, buf1sz);



The same is done for the second chunk.
Then I send the frames into each input buffer:


av_buffersrc_add_frame_flags(mediaIn_1, frame1, 0);
av_buffersrc_add_frame_flags(mediaIn_1, NULL, AV_BUFFERSRC_FLAG_PUSH);

...
av_buffersrc_add_frame_flags(mediaIn_2, frame2, 0);
av_buffersrc_add_frame_flags(mediaIn_2, NULL, AV_BUFFERSRC_FLAG_PUSH);



Then I get the output frame:


oframe = av_frame_alloc();
av_buffersink_get_frame_flags(mediaOut, oframe, 0);
...
av_frame_unref(oframe);



The oframe contains (frame1->nb_samples - 6000) samples instead of (frame1->nb_samples + frame2->nb_samples - something_for_xfade_needs). The next call to av_buffersink_get_frame_flags returns AVERROR_EOF.
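For reference, the usual pattern is to drain the sink in a loop until it reports EAGAIN or EOF rather than pulling a single frame; a minimal sketch (error handling omitted, as in the snippets above):

AVFrame *out = av_frame_alloc();
int ret;
/* pull every frame the graph has produced so far */
while ((ret = av_buffersink_get_frame_flags(mediaOut, out, 0)) >= 0) {
    /* consume out->nb_samples samples from out->data[0] here */
    av_frame_unref(out);
}
/* ret is AVERROR(EAGAIN) if the graph needs more input,
   or AVERROR_EOF once both inputs have been flushed */
av_frame_free(&out);

Draining this way shouldn't change the sample count described above, but it rules out output frames being left behind in the sink.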

What is wrong with this algorithm?


I tried the afade filter with a "t=in" input and it works fine. I think the problem is with the two inputs, but I don't get it.