
Media (1)
-
Richard Stallman et le logiciel libre
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
Other articles (79)
-
User profiles
12 April 2011, by — Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a link in the navigation, "Modifier votre profil", is (...)
-
Configuring language support
15 November 2010, by — Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" part of the site.
From there, the navigation menu gives access to a "Gestion des langues" section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language. In that case, it becomes greyed out in the configuration and (...)
-
Automatic backup of SPIP channels
1 April 2010, by — When setting up an open platform, it is important for hosting providers to have reasonably regular backups in order to cope with any problem that might arise.
This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (documents, elements (...)
On other sites (11573)
-
ffmpeg libavfilter acrossfade: I get just the first input in the output
29 July 2024, by Alex — I'm trying to combine two raw audio chunks (s16, 48000 Hz, mono) through acrossfade.


I create the filter nodes (all error checking skipped here):


/* abuffer source for the first audio chunk */
avfilter_graph_create_filter(&mediaIn_1,
 avfilter_get_by_name("abuffer"),
 "MediaIn_1",
 "sample_rate=48000:sample_fmt=s16:channel_layout=mono",
 NULL, graph);
/* abuffer source for the second audio chunk */
avfilter_graph_create_filter(&mediaIn_2,
 avfilter_get_by_name("abuffer"),
 "MediaIn_2",
 "sample_rate=48000:sample_fmt=s16:channel_layout=mono",
 NULL, graph);
/* abuffersink where the crossfaded result is collected */
avfilter_graph_create_filter(&mediaOut,
 avfilter_get_by_name("abuffersink"),
 "MediaOut",
 NULL, NULL, graph);
/* acrossfade: 6000-sample overlap, triangular curve on both sides */
avfilter_graph_create_filter(&crossfade,
 avfilter_get_by_name("acrossfade"),
 "crossfade", "nb_samples=6000:c1=tri:c2=tri", NULL, graph);



Then I link them in a graph:


/* chunk 1 -> acrossfade input 0, chunk 2 -> acrossfade input 1, result -> sink */
avfilter_link(mediaIn_1, 0, crossfade, 0);
avfilter_link(mediaIn_2, 0, crossfade, 1);
avfilter_link(crossfade, 0, mediaOut, 0);
avfilter_graph_config(graph, NULL);



After that I create a frame with all the chunk data:


frame1 = av_frame_alloc();
frame1->format = AV_SAMPLE_FMT_S16;
frame1->nb_samples = buf1sz / 2;   /* 2 bytes per s16 mono sample */
frame1->sample_rate = 48000;
frame1->ch_layout.order = AV_CHANNEL_ORDER_NATIVE;
frame1->ch_layout.nb_channels = 1;
frame1->ch_layout.u.mask = AV_CH_LAYOUT_MONO;
frame1->ch_layout.opaque = NULL;
frame1->pts = 0;
frame1->duration = buf1sz / 2;     /* in 1/48000 time base units, i.e. samples */
frame1->time_base.num = 1;
frame1->time_base.den = 48000;

av_frame_get_buffer(frame1, 0);    /* allocate the sample buffer */
memcpy(frame1->buf[0]->data, buf1, buf1sz);



Same for the second chunk.
Then I send the frames into each input buffer, followed by a NULL frame to signal end of stream:


av_buffersrc_add_frame_flags(mediaIn_1, frame1, 0);
av_buffersrc_add_frame_flags(mediaIn_1, NULL, AV_BUFFERSRC_FLAG_PUSH);

...
av_buffersrc_add_frame_flags(mediaIn_2, frame2, 0);
av_buffersrc_add_frame_flags(mediaIn_2, NULL, AV_BUFFERSRC_FLAG_PUSH);



Then I retrieve the output frame:


oframe = av_frame_alloc();
av_buffersink_get_frame_flags(mediaOut, oframe, 0);
...
av_frame_unref(oframe);



The oframe contains (frame1->nb_samples - 6000) samples instead of (frame1->nb_samples + frame2->nb_samples - something_for_xfade_needs).
The next call to av_buffersink_get_frame_flags returns AVERROR_EOF.

What is wrong with this algorithm?


I tried the afade filter with a "t=in" input and it works fine, so I think the problem is related to having two inputs. I don't get it.
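For reference, a common pattern when reading from an abuffersink is to drain it in a loop until it reports EAGAIN or EOF, since a graph may deliver its result split across several frames. A minimal sketch of that pattern (assuming both inputs have already been flushed with a NULL frame; this is not a verified fix for the issue above):

/* Drain the sink: keep pulling frames until no more output is available. */
AVFrame *out = av_frame_alloc();
int ret;
while ((ret = av_buffersink_get_frame_flags(mediaOut, out, 0)) >= 0) {
    /* ... consume out->nb_samples samples from out->data[0] ... */
    av_frame_unref(out);
}
if (ret != AVERROR(EAGAIN) && ret != AVERROR_EOF) {
    /* a genuine error occurred */
}
av_frame_free(&out);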


-
swscale : add two spatially stable dithering methods
23 March 2014, by Øyvind Kolås
Both of these dithering methods are from http://pippin.gimp.org/a_dither/. For GIF they can be considered better than Bayer (they provide more grey levels) and offer spatial stability, often giving more than twice as good compression and less visual flicker than error-diffusion methods (they also avoid the error-shadow artifacts of diffusion dithers). These methods are similar to blue/green-noise dither masks, but are simple enough to generate their mask on the fly. They are still research work in progress; more expensive-to-generate masks (which can be used in a LUT), such as ’void and cluster’ and similar methods, will yield superior results.
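As a rough illustration of the general idea described above (a per-pixel threshold mask computed on the fly, in contrast to error diffusion), here is a generic C sketch; the hash used for the mask is a placeholder for illustration only, not the actual a_dither formula from the link:

/* Generic masked (ordered) dithering: quantize a value in [0,1] to `levels`
 * steps, using a threshold computed per pixel on the fly. A real mask such
 * as a_dither would be chosen for its spectral properties and could just as
 * well be precomputed into a LUT. */
#include <stdint.h>

static inline float dither_mask(int x, int y)
{
    /* placeholder hash, NOT the a_dither formula */
    return ((x * 7 + y * 13) & 63) / 63.0f;
}

static uint8_t quantize_pixel(float value, int levels, int x, int y)
{
    float v = value * (levels - 1) + dither_mask(x, y);
    int   q = (int)v;
    if (q > levels - 1)
        q = levels - 1;
    return (uint8_t)q;
}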
-
ffmpeg 180 degree panoramic fisheye image to equirectangular / flat
7 July 2024, by Willy62 — I am trying to get my Hikvision PanoVu image of a sports field to look like a standard camera image, similar to what would be seen with a Veo solution / traditional camera.


This is what the image would ideally look like with a little bit of zoom. Note the players are all upright and it looks "correct" and not skewed, with the far end of the field in line with the horizon.




The original image looks like this (same field but other side). This is a 180 degree panoramic image from a Hikvision camera as found here.


It provides the following output natively.




I have had some luck converting the image with ffmpeg using the v360 filter. Note there is a downward tilt, meaning I have to apply some yaw to correct it.


v360=input=fisheye:output=rectilinear:ih_fov=180:iv_fov=87.5:d_fov=87.5:pitch=20:yaw=5:w=3840:h=2160



And this gets the following output:




So the challenge here is to make the original image flat/equirectangular while addressing the skew, such that:


- the players are oriented "upright"
- the far sideline of the field looks like a straight line, in line with the horizon
- the image quality is preserved as well as possible








With these cameras the image is 32MP so there is the opportunity to do an ePTZ into the area of interest.


I suspect v360 isn't the right choice here and that some remap-style filter is needed, or perhaps I am better off going across to GStreamer or similar.


I tried an ffmpeg v360 filter and it partially works, but the players are still skewed because the top of the image is not wide enough. The issue could possibly be solved by correctly applying a perspective filter, but I think this would only mask the problem, and the perspective correction requires a complex filter that hasn't worked for me so far.
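As a rough illustration of the kind of chain being discussed, the v360 projection above could be followed by a crop to emulate a crude ePTZ into an area of interest. All filenames, crop numbers and encoder settings below are placeholders rather than a tested solution:

ffmpeg -i panorama_input.mp4 \
  -vf "v360=input=fisheye:output=rectilinear:ih_fov=180:iv_fov=87.5:d_fov=87.5:pitch=20:yaw=5:w=3840:h=2160,crop=1920:1080:960:540" \
  -c:v libx264 -crf 18 corrected_eptz.mp4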