Other articles (93)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes made when upgrading MediaSPIP from version 0.1 to version 0.2. What's new?
    Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the two images below for a comparison.
    To do this, simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (11334)

  • target_link_libraries in CMake using Android Studio 2.2.2

    22 November 2016, by fadi

    I am facing a weird issue, and it's difficult to tell why because the compiler doesn't give any errors.

    I created a new project in Android Studio 2.2.2 with C++ support.
    I edited the .cpp file inside src/main/cpp and compiled the project to obtain a (.so) file that I can use as a shared library. Up to this point everything works perfectly.

    Here is where the problem occurs:

    I am trying to link prebuilt shared libraries from ffmpeg. I have already built the libraries in .so format, and all I need to do is link them to my .cpp file.

    To link the libraries, I opened the CMakeLists.txt inside Android Studio and told CMake to link those prebuilt shared libraries using the following code:

    add_library(libavformat SHARED IMPORTED)

    set_target_properties(libavformat PROPERTIES IMPORTED_LOCATION C:/Android/SDK/MyProjects/ffmpeg_to_jpg/P3/app/src/main/jniLibs/libavformat-55.so)

    include_directories(src/main/cpp/include/)

    target_link_libraries(native-lib  libavformat)

    This code basically links libavformat into native-lib (which is built from my .cpp file).
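
    For reference, a self-contained CMakeLists.txt along these lines might look like the sketch below; the per-ABI ${ANDROID_ABI} jniLibs layout and the native-lib.cpp source path are assumptions, not details taken from the project above.

    # Sketch of a complete CMakeLists.txt for this setup; paths are assumptions.
    cmake_minimum_required(VERSION 3.4.1)

    # The library built from the project's own C++ sources.
    add_library(native-lib SHARED src/main/cpp/native-lib.cpp)

    # Import the prebuilt FFmpeg library. ${ANDROID_ABI} is defined by the
    # NDK toolchain, so each ABI resolves to its own copy of the .so.
    add_library(libavformat SHARED IMPORTED)
    set_target_properties(libavformat PROPERTIES IMPORTED_LOCATION
        ${CMAKE_SOURCE_DIR}/src/main/jniLibs/${ANDROID_ABI}/libavformat-55.so)

    include_directories(src/main/cpp/include/)

    # Link the imported FFmpeg library (and the NDK log library) into native-lib.
    target_link_libraries(native-lib libavformat log)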

    The linking step itself seems fine: the compiler doesn't complain about any missing dependencies.

    However, my original shared library (native-lib) stops working: I cannot call any functions from it any more.

    If I remove the linking line

    target_link_libraries(native-lib  libavformat)

    then native-lib.so works fine and I can call any function from it that does not depend on libavformat.

    I am not sure what is going on. Like I said, the compiler doesn't issue any warnings or errors. It is almost as if, after linking, the contents of native-lib were overwritten by libavformat!

    Any ideas?

  • ffmpeg's segment_atclocktime cuts at inaccurate times for audio

    3 May 2023, by Ross Richardson

    I am using ffmpeg's segment muxer to save files of an AAC stream to disk in hourly segments.
    The segmenting works well, but with segment_atclocktime the files are cut at slightly different clock times each hour.

    I would like each file to start exactly on the hour, e.g. 12:00:00, 13:00:00, etc., or at least to start at or after the hour and never before it, e.g. 12:00:00, 13:00:01, 14:00:00.

    I am using ffmpeg-python to process the AAC stream and send it to two outputs: stdout and these segments.
    Here's the code:

    import ffmpeg  # ffmpeg-python

    out1 = ffmpeg.input(stream, loglevel="panic").output("pipe:",
                                                         format="s16le",
                                                         acodec="pcm_s16le",
                                                         ac="1",
                                                         ar="16000")

    out2 = ffmpeg.input(stream, loglevel="info").output("rec/%Y-%m-%d-%H%M%S.aac",
                                                        acodec="copy",
                                                        format="segment",
                                                        segment_time="3600",
                                                        segment_atclocktime="1",
                                                        reset_timestamps="1",
                                                        strftime="1")

    ffmpeg.merge_outputs(out1, out2).run_async(pipe_stdout=True, overwrite_output=True)

    Most files are produced at the desired time: 05:00:00, 06:00:00, 07:00:00, but one or two each day start at 08:59:59 (where 09:00:00 would be desired), or even 16:00:24.

    I understand that each segment needs to begin on an audio sample, so it can't be perfectly on the hour, but I am wondering how I can make it more consistent. Ideally, each hour's recording would begin at 00:00 or later, and never before the hour.

    I have tried using min_seg_duration 3600 and reset_timestamps 1.
    I am not sure exactly how to use segment_clocktime_wrap_duration for audio, or whether segment_time_delta applies to audio.
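
    For reference, a sketch of how those two options could be passed through ffmpeg-python is below; the values 60 and 0.05 are illustrative guesses, not tested recommendations:

    # Hypothetical variant of out2 above: segment_clocktime_wrap_duration
    # bounds how late after the clock tick a segment may still start, and
    # segment_time_delta lets the cut land slightly before the nominal time.
    out2 = ffmpeg.input(stream, loglevel="info").output(
        "rec/%Y-%m-%d-%H%M%S.aac",
        acodec="copy",
        format="segment",
        segment_time="3600",
        segment_atclocktime="1",
        segment_clocktime_wrap_duration="60",  # illustrative (seconds)
        segment_time_delta="0.05",             # illustrative (seconds)
        reset_timestamps="1",
        strftime="1")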

    


    I'd appreciate any advice on, or explanation of, how segment_atclocktime works with audio, as much of the material online seems video-focused.

  • Render SharpDX Texture2D in UWP application

    10 December 2019, by Alex

    I'm implementing a solution for hardware-accelerated H264 decoding and rendering in a UWP application. I want to avoid copying from GPU to CPU.
    The solution consists of two parts:

    1. A C library that decodes the H264 stream using ffmpeg
    2. A UWP/C#/SharpDX application that receives the encoded data, P/Invokes the library, and renders the decoded frames.

    I receive the encoded data in the C# application and send it to the C library to decode, getting the pointer to the frame back via P/Invoke.

    The C part looks good so far. I managed to obtain a pointer to the decoded frame on the GPU in the C library:

    // ffmpeg decoding logic
    ID3D11Texture2D* texturePointer = (ID3D11Texture2D*)context->frame->data[0];

    I managed to receive this pointer in the C# code and create a SharpDX texture from it:

    var texturePointer = decoder.Decode(...data...); // pinvoke
    if (texturePointer != IntPtr.Zero)
    {
       var texture = new Texture2D(texturePointer); // works just fine
    }

    Now I need to render it on the screen. My understanding is that I can create a class that extends SurfaceImageSource and assign it as the Source of a XAML Image object.
    It could be something like this:

    public class RemoteMediaImageSource : SurfaceImageSource
    {
       public void BeginDraw(IntPtr texturePointer)
       {
           var texture = new Texture2D(texturePointer);
           // What do I do here to render the GPU texture to the screen?
       }
    }

    Is my assumption correct?
    If yes, how exactly do I do the rendering part (a code example would be highly appreciated)?
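
    For what it's worth, below is a minimal sketch of that rendering step using SharpDX's ISurfaceImageSourceNative interop. It assumes the decoder and the image source share the same D3D11 device, that the decoded frame is subresource 0 of the texture, and that the texture format matches the surface; with ffmpeg's d3d11va output (NV12 array textures) a format-conversion step would also be needed:

    using System;
    using SharpDX;
    using SharpDX.DXGI;
    using SharpDX.Mathematics.Interop;
    using Windows.UI.Xaml.Media.Imaging;
    using D3D11 = SharpDX.Direct3D11;

    public class RemoteMediaImageSource : SurfaceImageSource
    {
        private readonly D3D11.Device device;
        private readonly ISurfaceImageSourceNative sisNative;
        private readonly int width, height;

        // 'device' must be the same D3D11 device the decoder writes into.
        public RemoteMediaImageSource(D3D11.Device device, int width, int height)
            : base(width, height)
        {
            this.device = device;
            this.width = width;
            this.height = height;
            // Hand the XAML surface our DXGI device so it can allocate on it.
            sisNative = ComObject.As<ISurfaceImageSourceNative>(this);
            sisNative.Device = device.QueryInterface<SharpDX.DXGI.Device>();
        }

        public void Draw(IntPtr texturePointer)
        {
            using (var decoded = new D3D11.Texture2D(texturePointer))
            {
                RawPoint offset;
                using (var surface = sisNative.BeginDraw(
                           new RawRectangle(0, 0, width, height), out offset))
                using (var backBuffer = surface.QueryInterface<D3D11.Texture2D>())
                {
                    // GPU-to-GPU copy of the decoded frame into the surface's
                    // back buffer; no CPU round-trip. With d3d11va the slice
                    // index actually lives in frame->data[1], so subresource 0
                    // is a simplifying assumption here.
                    device.ImmediateContext.CopySubresourceRegion(
                        decoded, 0, null, backBuffer, 0, offset.X, offset.Y, 0);
                }
                sisNative.EndDraw();
            }
        }
    }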