Advanced search

Media (1)

Word: - Tags -/sintel

Other articles (97)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP means:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it handles a set of dynamic tags for use within the Semantic Web.
    XMP makes it possible to store, in the form of an XML document, information about a file: title, author, history (...)
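
    As a purely illustrative example (the title, creator and date values below are invented; only the packet structure and namespaces follow the usual XMP conventions described above), such an XML document might look like:

    <?xpacket begin="" id="W5M0MpCehiHzreSzNTczkc9d"?>
    <x:xmpmeta xmlns:x="adobe:ns:meta/">
      <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
        <rdf:Description rdf:about=""
            xmlns:dc="http://purl.org/dc/elements/1.1/"
            xmlns:xmp="http://ns.adobe.com/xap/1.0/">
          <dc:title>
            <rdf:Alt><rdf:li xml:lang="x-default">Sintel</rdf:li></rdf:Alt>
          </dc:title>
          <dc:creator>
            <rdf:Seq><rdf:li>Blender Foundation</rdf:li></rdf:Seq>
          </dc:creator>
          <xmp:CreateDate>2011-05-13T00:00:00Z</xmp:CreateDate>
        </rdf:Description>
      </rdf:RDF>
    </x:xmpmeta>
    <?xpacket end="w"?>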

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP sites while installing their functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

On other sites (6089)

  • ffmpeg ... "Impossible to convert between the formats supported by the filter"

    27 November 2017, by hydra3333

    Using the latest ffmpeg master built as at 2017.11.26, I’m having trouble deciphering what the error messages mean and, more importantly, what to do about them.

    Changing from -vf to -filter_complex did nothing (I had to try).
    The main error message seems to be

    Impossible to convert between the formats supported by the filter

    I have tried to insert "format=" and "scale" before/between/after yadif and unsharp_opencl but to no avail.

    I wonder, could it be something to do with needing hwupload/hwdownload/hwmap, or is that a red herring?

    What am I doing wrong?

    ".\ffmpeg_3.latest_master.exe" -hide_banner -v verbose -init_hw_device opencl=ocl:1.0 -filter_hw_device ocl -i ".\test_01.mpg" -an -map_metadata -1 -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp -filter_complex "[0:v]yadif=0:0:0,scale=flags=lanczos+accurate_rnd+full_chroma_int+full_chroma_inp,unsharp_opencl=lx=3:ly=3:la=0.5:cx=3:cy=3:ca=0.5,setdar=dar=16/9" -r 25 -c:v h264_nvenc -preset slow -bf 2 -g 50 -refs 3 -rc:v vbr_hq -rc-lookahead:v 32 -cq 22 -qmin 16 -qmax 25 -coder cabac -movflags +faststart -profile:v high -level 4.1 -pixel_format yuv420p -y ".\test_01.newest.MP4"
    [AVHWDeviceContext @ 0000020e229aad40] 1.0: NVIDIA CUDA / GeForce GTX 750 Ti
    [AVHWDeviceContext @ 0000020e229aad40] DXVA2 to OpenCL mapping function found (clCreateFromDX9MediaSurfaceKHR).
    [AVHWDeviceContext @ 0000020e229aad40] DXVA2 in OpenCL acquire function found (clEnqueueAcquireDX9MediaSurfacesKHR).
    [AVHWDeviceContext @ 0000020e229aad40] DXVA2 in OpenCL release function found (clEnqueueReleaseDX9MediaSurfacesKHR).
    [AVHWDeviceContext @ 0000020e229aad40] The cl_khr_d3d11_sharing extension is required for D3D11 to OpenCL mapping.
    [AVHWDeviceContext @ 0000020e229aad40] D3D11 to OpenCL mapping not usable.
    [mpeg @ 0000020e229ae240] max_analyze_duration 5000000 reached at 5000000 microseconds st:0
    Input #0, mpeg, from '.\test_01.mpg':
     Duration: 00:06:29.96, start: 0.240000, bitrate: 2799 kb/s
       Stream #0:0[0x1e0]: Video: mpeg2video (Main), 1 reference frame, yuv420p(tv, top first, left), 720x576 [SAR 64:45 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
       Stream #0:1[0x1c0]: Audio: mp2, 48000 Hz, stereo, s16p, 256 kb/s
    [Parsed_scale_1 @ 0000020e29951a20] w:iw h:ih flags:'lanczos+accurate_rnd+full_chroma_int+full_chroma_inp' interl:0
    Stream mapping:
     Stream #0:0 (mpeg2video) -> yadif
     setdar -> Stream #0:0 (h264_nvenc)
    Press [q] to stop, [?] for help
    [Parsed_scale_1 @ 0000020e29cccda0] w:iw h:ih flags:'lanczos+accurate_rnd+full_chroma_int+full_chroma_inp' interl:0
    [graph 0 input from stream 0:0 @ 0000020e29ccc980] w:720 h:576 pixfmt:yuv420p tb:1/90000 fr:25/1 sar:64/45 sws_param:flags=2
    [auto_scaler_0 @ 0000020e29ccc580] w:iw h:ih flags:'bilinear' interl:0
    [Parsed_unsharp_opencl_2 @ 0000020e29ccc4a0] auto-inserting filter 'auto_scaler_0' between the filter 'Parsed_scale_1' and the filter 'Parsed_unsharp_opencl_2'
    Impossible to convert between the formats supported by the filter 'Parsed_scale_1' and the filter 'auto_scaler_0'
    Error reinitializing filters!
    Failed to inject frame into filter network: Function not implemented
    Error while processing the decoded data for stream #0:0
    Conversion failed!
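
    For what it's worth, a hedged sketch of the kind of filter chain that might avoid the auto-inserted scaler: OpenCL filters such as unsharp_opencl expect frames in OpenCL device memory, so the standard hwupload and hwdownload filters would bracket the OpenCL stage, with the software filters (yadif, scale, setdar) kept outside it. The exact placement and the yuv420p software format here are assumptions, not a tested command:

    ".\ffmpeg_3.latest_master.exe" -hide_banner -v verbose -init_hw_device opencl=ocl:1.0 -filter_hw_device ocl -i ".\test_01.mpg" -an -map_metadata -1 -filter_complex "[0:v]yadif=0:0:0,scale=flags=lanczos+accurate_rnd+full_chroma_int+full_chroma_inp,format=yuv420p,hwupload,unsharp_opencl=lx=3:ly=3:la=0.5:cx=3:cy=3:ca=0.5,hwdownload,format=yuv420p,setdar=dar=16/9" [encoder options as above] -y ".\test_01.newest.MP4"
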
  • Libavcodec "the procedure entry point for av_frame_alloc could not be located" error in Visual Studio 2017 C++ project

    25 November 2019, by Aves

    I am trying to use libavcodec from the FFmpeg library in C++ with Visual Studio 2017 Community. I downloaded the latest x64 dev and shared builds from zeranoe (version 20171217), set up the include directories and additional libraries in Visual Studio for an x64 build, and added the DLL files from the shared package to my PATH.

    This is my sample test code:

    extern "C" {
    #include
    }
    int main() {
       avcodec_register_all();
       AVFrame *pAvFrame = av_frame_alloc();
       av_frame_free(&pAvFrame);
       return 0;
    }

    The code compiles without problems, but when I run the application I see a dialogue window with the error message "the procedure entry point for av_frame_alloc could not be located in DLL" (the actual message is not in English; this is a translated version).

    I tried setting Linker->Optimization->References to /OPT:NOREF, as advised in similar questions, but it did not help.

    Dependency Walker shows that av_frame_alloc is exported, and its "Entry Point" is not bound. What is a little strange is that av_frame_alloc is displayed in both avcodec-58.dll (in red) and avutil-56.dll (in green). Maybe the reason is that the application is trying to get this function from avcodec instead of avutil, but I’m not sure, since I did not check the source code of these libraries.

    So the question is: how do I set up such a simple FFmpeg-based C++ project in VS2017, and where am I going wrong?

    UPD. 1.

    Linker flags: /OUT:"C:\work\code\TestFfmpeg\x64\Release\TestFfmpeg.exe" /MANIFEST /NXCOMPAT /PDB:"C:\work\code\TestFfmpeg\x64\Release\TestFfmpeg.pdb" /DYNAMICBASE "c:\work\dev\ffmpeg-20171217-387ee1d-win64-dev\lib*.lib" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" /DEBUG:FULL /MACHINE:X64 /OPT:NOREF /PGD:"C:\work\code\TestFfmpeg\x64\Release\TestFfmpeg.pgd" /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /ManifestFile:"x64\Release\TestFfmpeg.exe.intermediate.manifest" /OPT:ICF /ERRORREPORT:PROMPT /NOLOGO /TLBID:1
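
    For what it's worth, a hedged way to check which DLL actually exports av_frame_alloc and which copy of that DLL the loader will pick up (dumpbin and where are standard Visual Studio / Windows command-line tools; the DLL names are those of the zeranoe 20171217 shared build mentioned above):

    dumpbin /dependents x64\Release\TestFfmpeg.exe
    dumpbin /exports avutil-56.dll | findstr av_frame_alloc
    where avutil-56.dll

    A "procedure entry point could not be located" error at run time often means the loader resolved the DLL name against a different (older) copy found earlier on the PATH than the one the import libraries were built against.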

  • converting a "gif" to video using swift

    3 December 2019, by James Woodrow

    I’ve looked around and found a few things here and there, mainly that I should be using AVAssetWriter to do this. But I have zero experience with it and with video editing/creation, so that doesn’t help me much: I can’t seem to find anything I can easily modify (or at least not at my level of knowledge) so that it works as I intend it to.

    I have an app which takes n photos every cft seconds (capture frame time, which I get from a backend server; it’s a double for obvious reasons). I then display these frames using a UIImageView, and the frames change every dft seconds (display frame time, which I also get from the backend server and which can be different from cft). Up until this point, nothing complicated.

    The current workflow is that these frames are sent back to a server along with any relevant information I want; the server then uses ImageMagick to create a real GIF file and ffmpeg to create a 15-second video using said GIF.

    The issue is that this keeps my Heroku server bills higher than I would like, because of the limited memory on the dynos, and generating these videos takes about 5-10 seconds I believe (not sure, but it’s longer than I’d like).

    So the idea I had was to make the app create the video, since it already has all the information it needs, and then simply upload it with the rest of the frames and relevant data. Using bandwidth nowadays is much cheaper than buying extra processing power on a server.

    • it has n frames to loop over
    • it has a float value, dft, representing how long each frame should last
    • it has a GPU, or at least a much better CPU than the dynos Heroku has to offer

    I’ve also looked around to see if anyone has made an extensive tutorial on how to use ffmpeg in Swift, but I still didn’t find anything at my level; I didn’t even find a tutorial per se, only some GitHub projects which were partially completed and/or without the original tutorial linked, so the thought process behind them is hard to follow.

    I would appreciate any tips/code sample/tutorials on the subject.

    I’m adding the ffmpeg command-line equivalent of what I would love to be able to do (if I could use ffmpeg directly with iOS, that would be nice too):

    ffmpeg -framerate 100/13 -loop 1 -i frame%02d.png -c:v libx264 -r 100/13 -pix_fmt yuv420p -t 0:15 instagram.mp4

    Basically, I used 100 / (dft * 100) for the input frame rate and just output at the same fps for 15 seconds. By the way, if there are any ways to optimise this command so it runs faster without losing quality, I might be able to keep the current Heroku workflow, although I would still prefer an iOS solution.
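
    For reference, below is a minimal, hedged sketch of the AVAssetWriter approach mentioned at the top of this question: it writes an array of UIImage frames to an H.264 .mp4, holding each frame for dft seconds. The function name makeVideo, the size parameter and the simplistic pixel-buffer drawing are illustrative assumptions; error handling, looping the sequence out to the full 15 seconds, and performance tuning are deliberately left out.

    import AVFoundation
    import UIKit

    enum VideoWriteError: Error { case cannotStart }

    // Hypothetical helper (a sketch, not production code): writes `frames` to an
    // H.264 .mp4 at `outputURL`, showing each frame for `dft` seconds.
    func makeVideo(frames: [UIImage], dft: Double, size: CGSize,
                   outputURL: URL, completion: @escaping () -> Void) throws {
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)

        let settings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: Int(size.width),
            AVVideoHeightKey: Int(size.height)
        ]
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input,
            sourcePixelBufferAttributes: [
                kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB,
                kCVPixelBufferWidthKey as String: Int(size.width),
                kCVPixelBufferHeightKey as String: Int(size.height)
            ])
        writer.add(input)

        guard writer.startWriting() else { throw VideoWriteError.cannotStart }
        writer.startSession(atSourceTime: .zero)

        for (index, image) in frames.enumerated() {
            // Naive wait, kept simple for the sketch; a real implementation would
            // use input.requestMediaDataWhenReady(on:) instead of polling.
            while !input.isReadyForMoreMediaData { Thread.sleep(forTimeInterval: 0.01) }

            guard let buffer = pixelBuffer(from: image, size: size) else { continue }
            // Each frame starts at index * dft seconds; 600 is the usual video timescale.
            let time = CMTime(seconds: Double(index) * dft, preferredTimescale: 600)
            if !adaptor.append(buffer, withPresentationTime: time) { break }
        }

        input.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }

    // Draws a UIImage into a newly created 32ARGB CVPixelBuffer.
    private func pixelBuffer(from image: UIImage, size: CGSize) -> CVPixelBuffer? {
        let attrs = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                     kCVPixelBufferCGBitmapContextCompatibilityKey as String: true] as CFDictionary
        var pb: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width), Int(size.height),
                                         kCVPixelFormatType_32ARGB, attrs, &pb)
        guard status == kCVReturnSuccess, let buffer = pb, let cgImage = image.cgImage else {
            return nil
        }

        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                      width: Int(size.width), height: Int(size.height),
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
            return nil
        }
        context.draw(cgImage, in: CGRect(origin: .zero, size: size))
        return buffer
    }

    To match the -t 0:15 in the command above, the frame loop could simply keep cycling through the frames until index * dft reaches 15 seconds before calling markAsFinished().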