
Media (29)
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (58)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Customising by adding your logo, banner or background image
5 September 2013
Some themes take into account three customisation elements: adding a logo; adding a banner; adding a background image.
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things):
- implementation costs to be shared between several different projects / individuals
- rapid deployment of multiple unique sites
- creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (12459)
-
Bug #4438: Missing string :message:lien_reponse_message:
22 March 2020
This puzzles me...
The language strings are in 'forum', here: https://git.spip.net/spip/forum/src/branch/master/lang/forum_fr.php#L129
So calling `_T('message:lien_reponse_message')` will return nothing, whatever the SPIP version.
This string (forum:lien_reponse_message) is used when the message has an `id_parent`. The question seems rather to be:
- either `#OBJET`, whose value is 'message', is wrong (it was supposed to be something else, such as the parent's object, but a bug filled in 'message'?);
- or had we simply never hit this case before?
-
FFmpeg can't recognize 3 channels with 32 bits each
4 April 2022, by Chryfi
I am writing the linearized depth buffer of a game to OpenEXR using FFmpeg. Unfortunately, FFmpeg does not fully adhere to the OpenEXR file specification (such as allowing an unsigned integer for one channel), so I am writing one float channel to OpenEXR, which is put into the green channel, with this command:

-f rawvideo -pix_fmt grayf32be -s %WIDTH%x%HEIGHT% -r %FPS% -i - -vf %DEFVF% -preset ultrafast -tune zerolatency -qp 6 -compression zip1 -pix_fmt gbrpf32le %NAME%_depth_%d.exr
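For context, here is a rough, self-contained sketch of how such raw frames might be piped to that command from Java (the class name, resolution, frame rate, output name and the linearizeDepth body are placeholder assumptions, not taken from the question; the filter and encoder tuning flags are omitted):

import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class DepthPipeSketch {
    static final int WIDTH = 1920, HEIGHT = 1080; // placeholders for %WIDTH%/%HEIGHT%

    // Placeholder: the game's own linearization would go here.
    static float linearizeDepth(float depth) {
        return depth;
    }

    public static void main(String[] args) throws Exception {
        Process ffmpeg = new ProcessBuilder(
                "ffmpeg", "-f", "rawvideo", "-pix_fmt", "grayf32be",
                "-s", WIDTH + "x" + HEIGHT, "-r", "60", "-i", "-",
                "-compression", "zip1", "-pix_fmt", "gbrpf32le",
                "depth_%d.exr")
                .redirectErrorStream(true)
                .start();
        OutputStream stdin = ffmpeg.getOutputStream();

        float[] depth = new float[WIDTH * HEIGHT]; // one frame of raw depth values
        ByteBuffer frame = ByteBuffer.allocate(WIDTH * HEIGHT * 4)
                .order(ByteOrder.BIG_ENDIAN); // grayf32be expects big-endian floats
        for (float d : depth) {
            frame.putFloat(linearizeDepth(d));
        }
        stdin.write(frame.array());
        stdin.flush();
        stdin.close(); // closing stdin signals EOF so ffmpeg can finish the file
        ffmpeg.waitFor();
    }
}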

The float range is from 0F to 1F and it is linear. I can confirm that the calculation and linearization are correct by testing a 16-bit integer (per pixel component) PNG in the Blender compositor. The 16-bit integer data is written like this:

short s = (short) (linearizeDepth(depth) * (Math.pow(2, 16) - 1));

whereas for float, the linearized value is written directly to OpenEXR without multiplying it by anything.
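To make the two paths concrete, here is a minimal side-by-side sketch of both writes (the names are illustrative, not the asker's actual code):

import java.nio.ByteBuffer;

public class DepthWriteComparison {
    public static void main(String[] args) {
        float linearDepth = 0.25f; // an already-linearized depth value in [0, 1]

        // 16-bit PNG path: scale to the full unsigned 16-bit range and truncate
        short png16 = (short) (linearDepth * (Math.pow(2, 16) - 1));

        // float EXR path: the linear value is written out unchanged
        ByteBuffer buffer = ByteBuffer.allocate(4);
        buffer.putFloat(linearDepth);

        System.out.println("16-bit: " + (png16 & 0xFFFF) + ", float: " + linearDepth);
    }
}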

However, when viewing the OpenEXR file, it doesn't have the same "gradient" as the 16-bit PNG. When viewing them side by side, it appears as if the values near 0 are not linear, and they are not as dark as they should be, unlike in the 16-bit PNG.
(And yes, I set the image node to linear.) Comparing it with 3D tracking data from the game, I can't reproduce the depth and can't mask things using the depth buffer, whereas with the PNG I can.


How is it possible for a linear float range to turn out so different from a linear integer range in an image?


UPDATE:


I now write 3 channels to ffmpeg with this code:


float f2 = this.linearizeDepth(depth);

buffer.putFloat(f2);
buffer.putFloat(0);
buffer.putFloat(0);



The byte buffer has size width * height * 3 * 4, i.e. 3 channels of 4 bytes each. The command is now:

-f rawvideo -pix_fmt gbrpf32be -s %WIDTH%x%HEIGHT% -r %FPS% -i - -vf %DEFVF% -preset ultrafast -tune zerolatency -qp 6 -compression zip1 -pix_fmt gbrpf32le %NAME%_depth_%d.exr

which should mean that the input (the byte buffer) is expected to contain 32-bit floats in 3 channels.
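One thing worth noting (this is general FFmpeg pixel-format behaviour, not a confirmed diagnosis of the problem above): gbrpf32be is a planar format, so a raw frame in that pixel format is laid out plane by plane, all green samples first, then all blue, then all red, rather than interleaved G,B,R per pixel as in the putFloat loop above. A sketch of what a planar frame buffer could look like (method and parameter names are made up for illustration):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PlanarPackingSketch {
    // Packs one frame as planar GBR 32-bit floats (gbrpf32be):
    // the whole G plane, then the whole B plane, then the whole R plane.
    // linearDepth is assumed to hold width * height linearized values.
    static byte[] packGbrpF32be(float[] linearDepth, int width, int height) {
        ByteBuffer buf = ByteBuffer.allocate(width * height * 3 * 4)
                .order(ByteOrder.BIG_ENDIAN);
        for (float v : linearDepth) {             // G plane: the depth values
            buf.putFloat(v);
        }
        for (int plane = 0; plane < 2; plane++) { // B plane, then R plane: zeros here
            for (int i = 0; i < width * height; i++) {
                buf.putFloat(0f);
            }
        }
        return buf.array();
    }
}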

FFmpeg is somehow splitting up the channels or something... could it be a bug, or is it my fault?


-
ffmpeg libavfilter acrossfade: I get just the first input in the output
29 July 2024, by Alex
I'm trying to combine two raw audio chunks (s16, 48000 Hz, mono) through acrossfade.


I create filter nodes (skip all error checking here):


avfilter_graph_create_filter(&mediaIn_1,
 avfilter_get_by_name("abuffer"),
 "MediaIn_1",
 "sample_rate=48000:sample_fmt=s16:channel_layout=mono",
 NULL, graph);
avfilter_graph_create_filter(&mediaIn_2,
 avfilter_get_by_name("abuffer"),
 "MediaIn_2",
 "sample_rate=48000:sample_fmt=s16:channel_layout=mono",
 NULL, graph);
avfilter_graph_create_filter(&mediaOut,
 avfilter_get_by_name("abuffersink"),
 "MediaOut",
 NULL, NULL, graph);
avfilter_graph_create_filter(&crossfade,
 avfilter_get_by_name("acrossfade"),
 "crossfade", "nb_samples=6000:c1=tri:c2=tri", NULL, graph);



Then I link them in a graph:


avfilter_link(mediaIn_1, 0, crossfade, 0);
avfilter_link(mediaIn_2, 0, crossfade, 1);
avfilter_link(crossfade, 0, mediaOut, 0);
avfilter_graph_config(graph, NULL);



After that I create a frame with all the chunk data:


frame1 = av_frame_alloc();
frame1->format = AV_SAMPLE_FMT_S16;
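// s16 mono is 2 bytes per sample, so the chunk holds buf1sz / 2 samples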
frame1->nb_samples = buf1sz / 2;
frame1->sample_rate = 48000;
frame1->ch_layout.order = AV_CHANNEL_ORDER_NATIVE;
frame1->ch_layout.nb_channels = 1;
frame1->ch_layout.u.mask = AV_CH_LAYOUT_MONO;
frame1->ch_layout.opaque = NULL;
frame1->pts = 0;
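// duration is given in time_base units (1/48000 s), i.e. in samples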
frame1->duration = buf1sz / 2;
frame1->time_base.num = 1;
frame1->time_base.den = 48000;

av_frame_get_buffer(frame1, 0);
memcpy(frame1->buf[0]->data, buf1, buf1sz);



Same for the second chunk.
Then I send the frames into each input buffer:


av_buffersrc_add_frame_flags(mediaIn_1, frame1, 0);
av_buffersrc_add_frame_flags(mediaIn_1, NULL, AV_BUFFERSRC_FLAG_PUSH);

...
av_buffersrc_add_frame_flags(mediaIn_2, frame2, 0);
av_buffersrc_add_frame_flags(mediaIn_2, NULL, AV_BUFFERSRC_FLAG_PUSH);



Then I get the output frame:


oframe = av_frame_alloc();
av_buffersink_get_frame_flags(mediaOut, oframe, 0);
...
av_frame_unref(oframe);



The oframe contains (frame1->nb_samples - 6000) samples instead of (frame1->nb_samples + frame2->nb_samples - something_for_xfade_needs).
The next call to av_buffersink_get_frame_flags returns AVERROR_EOF.

What is wrong with this algorithm?


I tried the afade filter with a "t=in" input and it works fine. I think the problem is with having two inputs. I don't get it.