
Other articles (99)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Customising by adding your logo, banner or background image
5 September 2013. Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013. Present the changes in your MédiaSPIP, or the news of your projects, on your MédiaSPIP using the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customise the form used to create a news item.
News item creation form: for a document of the news type, the default fields are: publication date (customise the publication date) (...)
On other sites (12351)
-
Revision 32970: We're making slow but steady progress.
15 November 2009, by vxl@… — Log: We're making slow but steady progress.
-
FFmpeg can't recognize 3 channels with 32 bits each
4 April 2022, by Chryfi. I am writing the linearized depth buffer of a game to OpenEXR using FFmpeg. Unfortunately, FFmpeg does not fully adhere to the OpenEXR file specification (for example, allowing an unsigned integer for one channel), so I am writing a single float channel to OpenEXR, which is put into the green channel, with this command:
-f rawvideo -pix_fmt grayf32be -s %WIDTH%x%HEIGHT% -r %FPS% -i - -vf %DEFVF% -preset ultrafast -tune zerolatency -qp 6 -compression zip1 -pix_fmt gbrpf32le %NAME%_depth_%d.exr
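
For context, that command reads one big-endian 32-bit float per pixel from the pipe (that is what grayf32be means). A minimal C sketch of such a writer, purely for illustration (the function and variable names are placeholders, not taken from the original code), could look like this:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Write one float in big-endian byte order, as grayf32be expects. */
static void put_f32_be(float v, FILE *out)
{
    uint32_t bits;
    memcpy(&bits, &v, sizeof bits);              /* reinterpret the float's bytes */
    uint8_t buf[4] = {
        (bits >> 24) & 0xff, (bits >> 16) & 0xff,
        (bits >> 8) & 0xff,  bits & 0xff
    };
    fwrite(buf, 1, sizeof buf, out);             /* most significant byte first */
}

/* Emit one grayf32be frame: one linearized depth value per pixel. */
static void write_depth_frame(const float *depth, int width, int height, FILE *pipe)
{
    for (int i = 0; i < width * height; i++)
        put_f32_be(depth[i], pipe);
}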

The float range is from 0F to 1F and it is linear. I can confirm that the calculation and linearization are correct by testing a 16-bit integer (per pixel component) PNG in the Blender compositor. The 16-bit integer data is written like this:
short s = (short) (linearizeDepth(depth) * (Math.pow(2,16) - 1));
whereas for float, the linearized value is written directly to OpenEXR without multiplying by anything.

However, when viewing the OpenEXR file it doesn't have the same "gradient" as the 16-bit PNG. When viewing them side by side, it appears as if the values near 0 are not linear, and they are not as dark as they should be, unlike in the 16-bit PNG.
(And yes, I set the image node to linear.) Comparing it with 3D tracking data from the game, I can't reproduce the depth and can't mask things using the depth buffer, whereas with the PNG I can.


How is it possible for a linear float range to turn out so different from a linear integer range in an image?


UPDATE:


I now write 3 channels to FFmpeg with this code:


float f2 = this.linearizeDepth(depth);

buffer.putFloat(f2);
buffer.putFloat(0);
buffer.putFloat(0);



The byte buffer has the size
width * height * 3 * 4
-> 3 channels of 4 bytes each. The command is now
-f rawvideo -pix_fmt gbrpf32be -s %WIDTH%x%HEIGHT% -r %FPS% -i - -vf %DEFVF% -preset ultrafast -tune zerolatency -qp 6 -compression zip1 -pix_fmt gbrpf32le %NAME%_depth_%d.exr
which should mean that the input (byte buffer) is expected to contain 32-bit floats with 3 channels.
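
For illustration only, the per-frame layout described above (three 32-bit floats per pixel, with the depth value in the first slot and zeros in the other two) could be built like this in C. The names build_depth_frame, depth, width and height are placeholders, and the endianness of the individual floats is ignored here:

#include <stdlib.h>

/* Sketch of the buffer described in the question: width * height * 3 * 4 bytes. */
float *build_depth_frame(const float *depth, int width, int height)
{
    float *buffer = malloc((size_t)width * height * 3 * sizeof(float));
    if (!buffer)
        return NULL;
    for (int i = 0; i < width * height; i++) {
        buffer[3 * i + 0] = depth[i];  /* linearized depth in the first channel */
        buffer[3 * i + 1] = 0.0f;      /* remaining two channels left at zero */
        buffer[3 * i + 2] = 0.0f;
    }
    return buffer;
}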

FFmpeg is somehow splitting up the channels, or something... could it be a bug, or could it be my fault?


-
ffmpeg libavfilter acrossfade: I get just the first input in the output
29 July 2024, by Alex. I'm trying to combine two raw audio chunks (s16, 48000, mono) through acrossfade.


I create the filter nodes (all error checking skipped here):


avfilter_graph_create_filter(&mediaIn_1,
 avfilter_get_by_name("abuffer"),
 "MediaIn_1",
 "sample_rate=48000:sample_fmt=s16:channel_layout=mono",
 NULL, graph);
avfilter_graph_create_filter(&mediaIn_2,
 avfilter_get_by_name("abuffer"),
 "MediaIn_2",
 "sample_rate=48000:sample_fmt=s16:channel_layout=mono",
 NULL, graph);
avfilter_graph_create_filter(&mediaOut,
 avfilter_get_by_name("abuffersink"),
 "MediaOut",
 NULL, NULL, graph);
avfilter_graph_create_filter(&crossfade,
 avfilter_get_by_name("acrossfade"),
 "crossfade", "nb_samples=6000:c1=tri:c2=tri", NULL, graph);



Then I link them in a graph:


avfilter_link(mediaIn_1, 0, crossfade, 0);
avfilter_link(mediaIn_2, 0, crossfade, 1);
avfilter_link(crossfade, 0, mediaOut, 0);
avfilter_graph_config(graph, NULL);



After that I create a frame with all the chunk data:


frame1 = av_frame_alloc();
frame1->format = AV_SAMPLE_FMT_S16;
frame1->nb_samples = buf1sz / 2;
frame1->sample_rate = 48000;
frame1->ch_layout.order = AV_CHANNEL_ORDER_NATIVE;
frame1->ch_layout.nb_channels = 1;
frame1->ch_layout.u.mask = AV_CH_LAYOUT_MONO;
frame1->ch_layout.opaque = NULL;
frame1->pts = 0;
frame1->duration = buf1sz / 2;
frame1->time_base.num = 1;
frame1->time_base.den = 48000;

av_frame_get_buffer(frame1, 0);
memcpy(frame1->buf[0]->data, buf1, buf1sz);



The same is done for the second chunk.
Then I send the frames into each input buffer:


av_buffersrc_add_frame_flags(mediaIn_1, frame1, 0);
av_buffersrc_add_frame_flags(mediaIn_1, NULL, AV_BUFFERSRC_FLAG_PUSH);

...
av_buffersrc_add_frame_flags(mediaIn_2, frame2, 0);
av_buffersrc_add_frame_flags(mediaIn_2, NULL, AV_BUFFERSRC_FLAG_PUSH);



Then I get the output frame:


oframe = av_frame_alloc();
av_buffersink_get_frame_flags(mediaOut, oframe, 0);
...
av_frame_unref(oframe);



The oframe contains (frame1->nb_samples - 6000) samples instead of (frame1->nb_samples + frame2->nb_samples - something_for_xfade_needs). The next call to av_buffersink_get_frame_flags returns AVERROR_EOF.
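
For reference, the usual way to make that next call is to keep pulling from the sink until it reports EOF; a minimal sketch, with error handling elided as in the rest of the code:

oframe = av_frame_alloc();
while (av_buffersink_get_frame_flags(mediaOut, oframe, 0) >= 0) {
 /* consume oframe->nb_samples samples from oframe->data[0] here */
 av_frame_unref(oframe);
}
/* the loop ends once the call returns AVERROR_EOF (or another error) */
av_frame_free(&oframe);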

What is wrong with this algorithm?


I tried the afade filter with a "t=in" input and it works fine. I think the problem is with the two inputs, but I don't get it.
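
For illustration, such a single-input afade test could be wired up roughly like this, reusing the abuffer/abuffersink contexts from above; the exact option string is an assumption, not taken from the original code:

AVFilterContext *fade = NULL;
avfilter_graph_create_filter(&fade,
 avfilter_get_by_name("afade"),
 "fade", "t=in:nb_samples=6000:curve=tri", NULL, graph);

/* only one audio input this time, unlike acrossfade */
avfilter_link(mediaIn_1, 0, fade, 0);
avfilter_link(fade, 0, mediaOut, 0);
avfilter_graph_config(graph, NULL);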