
Other articles (96)
-
User profiles
12 April 2011 - Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in on the site.
The user can access profile editing from their author page; a link in the navigation, "Modifier votre profil", is (...) -
Configuring language support
15 November 2010 - Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" section of the site.
From there, in the navigation menu, you can reach a "Gestion des langues" section that lets you enable support for new languages.
Each newly added language can still be deactivated as long as no object has been created in that language; once one has, it becomes greyed out in the configuration and (...) -
XMP PHP
13 May 2011 - According to Wikipedia, XMP stands for:
Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it handles a set of dynamic tags for use in the context of the Semantic Web.
XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)
On other sites (8012)
-
12 ways Matomo Analytics helps you to protect your visitor’s privacy
-
How to interpret ffmpeg recording options available for a webcam (directshow)?
5 January 2023, by Jones659 - I am trying to create a GUI for personal use that allows someone to customise the recording and converting options of ffmpeg without directly using the command line. At the moment, I am learning about the different parameters and flags in ffmpeg.


Apologies in advance if I end up asking some stupid questions; I am on a learning journey at the moment, and unfortunately not all of this info is available online in an easily understandable way.


I have a USB webcam which reported having the following options available to it:


[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=640x480 fps=5 max s=640x480 fps=30
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=640x480 fps=5 max s=640x480 fps=30 (tv, bt470bg/bt709/unknown, topleft) chroma_location=topleft
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=352x288 fps=5 max s=352x288 fps=30
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=352x288 fps=5 max s=352x288 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=320x240 fps=5 max s=320x240 fps=30
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=320x240 fps=5 max s=320x240 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=176x144 fps=5 max s=176x144 fps=30
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=176x144 fps=5 max s=176x144 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=160x120 fps=5 max s=160x120 fps=30
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=160x120 fps=5 max s=160x120 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=1280x1024 fps=5 max s=1280x1024 fps=9
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=1280x1024 fps=5 max s=1280x1024 fps=9 (tv, bt470bg/bt709/unknown, topleft)
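
For reference, I got this listing with commands along these lines ("USB Camera" is just a placeholder for whatever device name the first command reports on your machine):

ffmpeg -hide_banner -f dshow -list_devices true -i dummy
ffmpeg -hide_banner -f dshow -list_options true -i video="USB Camera"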



I just want to get to the bottom of how I should interpret this; apologies that I will ask multiple questions:


-
The fact that both resolution and fps have a min and max value (for every option) seems to imply that these two parameters are supposedly uncontrollably variable, right? In practice, the fps has been variable depending on brightness, but the resolution has not been - is it safe to assume that video imaging devices (especially webcams) do not have variable resolution?


-
Secondly, why is it that every option is listed twice, except that half of them specify extra info, such as color_range, color_space, and chroma_location? Is this just a quirk? Surely those extra parameter options should not be discarded?


-
It's hard to know how to make sense of this, but for example: the fact that only "tv" is ever shown - does that imply that the webcam can only ever do limited color range, and that there is no point trying to get a full 0-255 out of it? I read somewhere that "pc" implies the full range of 0-255, whereas "tv" implies a range of 16-235.


-
With regards to color space, is it acceptable to record the webcam as raw (un-encoded), and then convert to a different color space later down the line? Which approach to dealing with the color space yields the least amount of lost color? My only previous experience with color spaces is in the realm of images, where, for example, it makes no sense to convert sRGB to ROMM16 RGB: you're going to a color space which has wider coverage, and extra colors won't be created out of thin air; you'd want to go once from raw to a color space, and avoid converting between color spaces afterwards. Also, what does "unknown" mean in the color space options?












Here's the culmination of some research/testing I've done; is there anything correct, or seriously wrong, in the conclusions and assumptions I've made below?


My understanding of pixel_format is as follows: when you're recording (even to raw), you specify the pixel format the webcam produces using something like "-pixel_format yuyv422"; this is a "packed", not "planar", format. When you convert from raw to something like mkv using libx264, you can't specify a "packed" pixel format such as "yuyv422", but must instead use an appropriate planar counterpart, such as "yuv422p", which would be specified using "-pix_fmt yuv422p".
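
To make that concrete, the two-step pipeline I have in mind looks roughly like this; just a sketch rather than a tested command line ("USB Camera" stands in for the real device name, the file names are arbitrary, and I'm assuming an x264 build with 4:2:2 support):

ffmpeg -f dshow -video_size 640x480 -framerate 30 -pixel_format yuyv422 -i video="USB Camera" -c:v rawvideo raw_capture.avi
ffmpeg -i raw_capture.avi -c:v libx264 -pix_fmt yuv422p encoded.mkv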


I did a raw recording of the webcam (in which I recorded a bright light in the dark); I didn't set any of the options in the brackets above. I then converted this video using libx264 with the flags "-dst_range 1 -color_range 2", which I saw elsewhere on the internet.


Taking a screenshot of this video using VLC and putting it through ImageMagick's identify -verbose shows that the color range of the screenshot is 0-255. As for the video itself, MediaInfo reports "color range: Full" and VLC's codec info says "Decoded format: Planar 4:2:2 YUV full scale" - is this info worth anything, or is it just metadata that the video got tagged with?


At first I was happy about ImageMagick's color range reporting, but I am now thinking the 0-255 range could be a result of "overshoot" values produced by the camera, which aren't actually supposed to be mapped linearly.
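
One thing I'm planning to try, to separate what the file is merely tagged as from what the samples actually contain, is roughly the following (a sketch against the encoded.mkv from the commands above, or whatever file was actually produced): ffprobe only shows the tagged pix_fmt/color_range, while the signalstats filter prints the actual per-frame luma min/max, so values pinned around 16-235 would suggest the "full range" tag is only metadata.

ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt,color_range -of default=noprint_wrappers=1 encoded.mkv
ffmpeg -i encoded.mkv -vf signalstats,metadata=mode=print -an -f null -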


I appreciate that this probably feels like some school-kiddy offloading their homework assignment to avoid doing work, but I hope it can be seen that I've looked into these things prior to putting this post together.


Thanks in advance, if anyone takes the time to answer anything.


-
error: ‘avcodec_send_packet’ was not declared in this scope
4 July 2018, by StarShine - The following snippet of ffmpeg-based code builds and works on Windows with VC2012, VC2015 and VC2017.
With gcc on Ubuntu LTS 16.04 it gives me issues; more specifically, it does not seem to recognize avcodec_send_packet, avcodec_receive_frame and struct AVCodecParameters, and possibly more functions and structures that I’m not currently using.
error: ‘AVCodecParameters’ was not declared in this scope
error: ‘avcodec_send_packet’ was not declared in this scope
error: ‘avcodec_receive_frame’ was not declared in this scope

The code snippet is:
// the includes are actually in a precompiled header, included in cmake
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavdevice/avdevice.h>
#include <libavfilter/avfilter.h>
#include <libpostproc/postprocess.h>
#include <libswresample/swresample.h>
#include <libswscale/swscale.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libavutil/avassert.h>
#include <libavutil/avstring.h>
#include <libavutil/bprint.h>
#include <libavutil/display.h>
#include <libavutil/mathematics.h>
#include <libavutil/imgutils.h>
//#include <libavutil/libm.h>
#include <libavutil/parseutils.h>
#include <libavutil/pixdesc.h>
#include <libavutil/eval.h>
#include <libavutil/dict.h>
#include <libavutil/opt.h>
#include <libavutil/cpu.h>
#include <libavutil/ffversion.h>
#include <libavutil/version.h>
}
//
...
{
    if (av_read_frame(m_FormatContext, m_Packet) < 0) {
        // no more packets in the container (or a read error): stop feeding the decoder
        av_packet_unref(m_Packet);
        m_AllPacketsSent = true;
    } else {
        if (m_Packet->stream_index == m_StreamIndex) {
            // hand the compressed packet for our stream to the decoder
            avcodec_send_packet(m_CodecContext, m_Packet);
        }
    }
}
...

I read up on the ffmpeg history and learned that on Debian-based systems they at one point followed the Libav fork when it came about, and then recently some of the platforms switched back to the ffmpeg branch because ffmpeg was much more actively supported in terms of bugfixes, features and support. As a result, some of the interfaces were possibly broken.
I’ve seen git fixes on a library called MediaTomb (now Gerbera), which seems to have encountered the same, if not very similar, issues with codecpar (which I initially also had and fixed the same way):
https://github.com/gerbera/gerbera/issues/52
Here the commit seems to fix their specific issue by wrapping accesses to the codecpar field so that they fall back to the old codec field, which I also applied and which works.
I wonder if anyone knows which functions can be used for the errors given above, since these functions are in fact themselves replacements for deprecated functionality, according to the ffmpeg avcodec.h header comments (https://www.ffmpeg.org/doxygen/trunk/avcodec_8h_source.html). I hope this does not mean I would have to settle back into avcodec_encode_video2()-type functions?
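For what it's worth, the kind of loop I'm trying to get compiling is roughly the following; a minimal sketch, assuming the headers actually being picked up come from FFmpeg 3.1 or newer (which is where avcodec_send_packet()/avcodec_receive_frame() first appeared; as far as I can tell the stock Ubuntu 16.04 packages are FFmpeg 2.8, which predates this API):

extern "C" {
#include <libavcodec/avcodec.h>
}

// Feed one compressed packet to the decoder and drain every frame it can
// produce. Passing pkt == nullptr flushes the decoder at end of stream.
static int decode_packet(AVCodecContext *ctx, const AVPacket *pkt, AVFrame *frame)
{
    int ret = avcodec_send_packet(ctx, pkt);
    if (ret < 0)
        return ret;

    while (ret >= 0) {
        ret = avcodec_receive_frame(ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;   // decoder wants more input, or is fully drained
        if (ret < 0)
            return ret; // genuine decoding error
        // ... consume frame here ...
        av_frame_unref(frame);
    }
    return 0;
}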
Update:
For reference, it seems it has also popped up here: https://github.com/Motion-Project/motion/issues/338. The issue seems to be resolved if you can rebuild your ffmpeg stack.
Update:
To resolve the API version tangle, I ended up wiping out any ffmpeg reference and rebuilding ffmpeg from source. This seems to push things further along in the right direction; I have my source compiling correctly, but there is still something wrong with the way I’m linking things together.
Also, I’m using CMake to set up my makefiles, using find_package for some of the dependencies and handwritten find_path / find_library stuff for everything else. I’ve seen other people complain about the following linking issue, and a ton of case-specific replies, but none of them really sheds light on what the actual problem is. My installed Ubuntu version of ALSA is 1.1.xx, yet I still get complaints about a 0.9 version I’m supposedly linking against. Does anyone know what’s wrong with this?
Also, my libasound.so is a symlink to libasound.so.2.0.0, if that clears anything up. (I hope that double-slashed path at the end is correct too.)
/usr/bin/ld: /usr/lib/ffmpeg/libavdevice.a(alsa.o): undefined reference to symbol 'snd_pcm_hw_params_any@@ALSA_0.9' //usr/lib/x86_64-linux-gnu/libasound.so.2:
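
From what I can tell, the ALSA_0.9 part is just a versioned symbol inside libasound.so.2 rather than an old ALSA install, so my working assumption is that -lasound simply isn't on the link line (or comes before the static libavdevice.a that needs it). Something along these lines is what I'm using to get the full set of flags, assuming PKG_CONFIG_PATH points at the self-built FFmpeg's lib/pkgconfig directory:

# print every -l flag a statically linked libavdevice needs (including -lasound), in link order
pkg-config --static --libs libavdevice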