
Other articles (112)

  • Customizing by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Making files available

    14 April 2011, by

    By default, when it is initialized, MediaSPIP does not allow visitors to download files, whether they are originals or the result of their transformation or encoding; it only allows them to be viewed.
    However, it is both possible and easy to give visitors access to these documents in various forms.
    All of this happens on the template configuration page: go to the channel's administration area and choose, in the navigation (...)

  • Automatic installation script for MediaSPIP

    25 April 2011, by

    To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to ease this step on a server running a compatible Linux distribution.
    To use it, you need SSH access to your server and a "root" account, which allows the dependencies to be installed. Contact your hosting provider if you do not have these.
    The documentation for using the installation script (...)

On other sites (13666)

  • Hardware accelerated decoding with FFmpeg falls back to software decoding

    9 February 2024, by iexav

    So I have followed the FFmpeg example for hardware-accelerated decoding exactly as it is (I am referring to this example):

    https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/hw_decode.c#L76

    But I still seem to be decoding with the software decoder: when I open the Task Manager on Windows, the GPU isn't being used. Before I call av_hwframe_transfer_data() I check whether the frame is in the relevant hw_pix_fmt format, and it is. Everything runs with no errors, yet the GPU does nothing. As an example, I tried decoding a video that uses the VP9 codec; if I specify the hardware-accelerated codec I want by name, it actually does work:

    


    vidCodec = avcodec_find_decoder_by_name("vp9_cuvid");

    When I do this and look at the Task Manager, the CPU does much less work and my GPU actually does video-decode work. Having to specify the hardware-accelerated decoder by name for every single video I decode is ridiculous, though.

    


    Edit: as per user4581301's answer, here are the relevant pieces of code. (It's actually Java, because I am using the Java FFmpeg wrapper, but it's basically just a series of calls to FFmpeg functions.)

    


    ArrayList<String> deviceTypes = new ArrayList<>();
    int type = AV_HWDEVICE_TYPE_NONE;
    while ((type = av_hwdevice_iterate_types(type)) != AV_HWDEVICE_TYPE_NONE) {
        BytePointer p = av_hwdevice_get_type_name(type);
        deviceTypes.add(CString(p));
    }
    boolean flag = false;

    for (int j = 0; j /* (...) the rest of this loop was cut off in the original post */

    /* Allocate a codec context for the decoder */
    if ((video_c = avcodec_alloc_context3(vidCodec)) == null) {
        throw new Exception("avcodec_alloc_context3() error: Could not allocate video decoding context.");
    }

    /* Copy the stream parameters from the muxer */
    if ((ret = avcodec_parameters_to_context(video_c, video_st.codecpar())) < 0) {
        releaseUnsafe();
        throw new Exception("avcodec_parameters_to_context() error " + ret + ": Could not copy the video stream parameters.");
    }

    video_c.get_format(AvFormatGetter.getInstance());
    AVBufferRef hardwareDeviceContext = av_hwdevice_ctx_alloc(type);

    if ((ret = av_hwdevice_ctx_create(hardwareDeviceContext, type, (String) null, null, 0)) < 0) {
        System.err.println("Failed to create specified HW device. error " + ret);
    } else {
        video_c.hw_device_ctx(av_buffer_ref(hardwareDeviceContext));
    }

    // The function that gets called for get_format
    @Override
    public int call(AVCodecContext context, IntPointer format) {
        int p;
        for (int i = 0;; i++) {
            if ((p = format.get(i)) == hw_pix_fmt) {
                return p;
            }
            if (p == -1) {
                break;
            }
        }
        System.out.println(hw_pix_fmt + " is not found in the codec context");
        // Error
        return AV_PIX_FMT_NONE;
    }

    // The method that's used for decoding video frames
    public Optional<Boolean> decodeVideoFrame(AVPacket pkt, boolean readPacket, boolean keyFrames) throws Exception {
        int ret;
        // Decode video frame
        if (readPacket) {
            ret = avcodec_send_packet(video_c, pkt);
            if (ret < 0) {
                System.out.println("error during decoding");
                return Optional.empty();
            }
            if (pkt.data() == null && pkt.size() == 0) {
                pkt.stream_index(-1);
            }
        }

        // Did we get a video frame?
        while (true) {
            ret = avcodec_receive_frame(video_c, picture_hw);
            if (ret == AVERROR_EAGAIN() || ret == AVERROR_EOF()) {
                if (pkt.data() == null && pkt.size() == 0) {
                    return Optional.empty();
                } else {
                    return Optional.of(true);
                }
            } else if (ret < 0) {
                // Ignore errors to emulate the behavior of the old API
                // throw new Exception("avcodec_receive_frame() error " + ret + ": Error during video decoding.");
                return Optional.of(true);
            }

            if (!keyFrames || picture.pict_type() == AV_PICTURE_TYPE_I) {
                if (picture_hw.format() == hw_pix_fmt) {
                    if (av_hwframe_transfer_data(
                            picture,    // The frame that will contain the usable data.
                            picture_hw, // Frame returned by avcodec_receive_frame()
                            0) < 0) {
                        throw new Exception("Could not transfer data from gpu to cpu.");
                    }
                }
                // ... The rest of the method here
                return Optional.of(false);
            }
        }
    }

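    The selection step inside the get_format callback above is easy to get subtly wrong: FFmpeg hands the callback a list of candidate pixel formats terminated by AV_PIX_FMT_NONE (-1), and the callback must return the hardware format when it is present, otherwise decoding silently falls back to software. A minimal, self-contained sketch of that scan (plain Java; the class name HwFormatScan and the numeric format values are illustrative stand-ins for the FFmpeg enums, not part of the wrapper API):

    ```java
    public class HwFormatScan {
        // Stand-in for avutil's AV_PIX_FMT_NONE, which is -1 in FFmpeg.
        static final int AV_PIX_FMT_NONE = -1;

        /**
         * Mirrors the get_format callback: scan the AV_PIX_FMT_NONE-terminated
         * candidate list for the hardware pixel format. Returns hwPixFmt if
         * found, otherwise AV_PIX_FMT_NONE (software fallback).
         */
        static int chooseHwFormat(int[] candidates, int hwPixFmt) {
            for (int i = 0; candidates[i] != AV_PIX_FMT_NONE; i++) {
                if (candidates[i] == hwPixFmt) {
                    return hwPixFmt;
                }
            }
            return AV_PIX_FMT_NONE;
        }

        public static void main(String[] args) {
            int hwPixFmt = 119; // illustrative value for a hardware format id
            System.out.println(chooseHwFormat(new int[]{0, 23, 119, AV_PIX_FMT_NONE}, hwPixFmt)); // 119
            System.out.println(chooseHwFormat(new int[]{0, 23, AV_PIX_FMT_NONE}, hwPixFmt));      // -1
        }
    }
    ```

    Note that checking picture_hw.format() after decoding is not what enables the hwaccel: the callback's return value is what tells FFmpeg to use hardware frames, and hw_device_ctx must be set on the codec context before the codec is opened with avcodec_open2().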

  • The First Problem

    19 January 2011, by Multimedia Mike — HTML5

    A few years ago, The Linux Hater made the following poignant observation regarding Linux driver support:

    Drivers are only just the beginning... But for some reason y’all like to focus on the drivers. You know why lusers do that? Because it just happens to be the problem that people notice first.

    And so it is with the HTML5 video codec debate, re-invigorated in the past week by Google’s announcement of dropping native H.264 support in their own HTML5 video tag implementation. As I read up on the fiery debate, I kept wondering why people are so obsessed with this issue. Then I remembered the Linux Hater’s post and realized that the video codec issue is simply the first problem that most people notice regarding HTML5 video.

    I appreciate that the video codec debate has prompted Niedermayer to post on his blog once more. Otherwise, I’m just munching popcorn on the sidelines, amused and mildly relieved that the various factions are vociferously attacking each other rather than that little project I help with at work.

    Getting back to the "first problem" aspect— there’s so much emphasis on the video codec; I wonder why no one ever, ever mentions word one about an audio codec. AAC is typically the codec that pairs with H.264 in the MPEG stack. Dark Shikari once mentioned that "AAC’s licensing terms are exponentially more onerous than H.264’s. If Google didn’t want to use H.264, they would sure as hell not want to use AAC." Most people are probably using "H.264" to refer to the entire MPEG/H.264/AAC stack, even if they probably don’t understand what all of those pieces mean.

    Anyway, The Linux Hater’s driver piece continues:

    Once y’all have drivers, the fight will move to the next layer up. And like I said, it’s a lot harder at that layer.

    A few months ago, when I wanted to post the WebM output of my new VP8 encoder and thought it would be a nice touch to deliver it via a video tag, I ignored the video codec problem (just encoded a VP8/WebM file) only to immediately discover a problem at a different layer— specifically, embedding a file using a video tag triggers a full file download when the page is loaded, which is unacceptable from end-user and web-hosting perspectives. This is a known issue but doesn’t get as much attention, I guess because there are bigger problems to solve first (cf. the video codec issue).

    For other issues, check out the YouTube blog’s HTML5 post or Hulu’s post that also commented on HTML5. Issues such as video streaming flexibility, content protection, fullscreen video, webcam/microphone input, and numerous others are rarely mentioned in the debates. Only "video codec" is of paramount importance.

    But I’m lending too much weight to the cacophony of a largely uninformed internet debate. Realistically, I know there are many talented engineers down in the trenches working to solve at least some of these problems. To tie this in with the Linux driver example, I’m consistently stunned these days regarding how simple it is to get Linux working on a new computer— most commodity consumer hardware really does just work right out of the box. Maybe one day, we’ll wake up and find that HTML5 video has advanced to the point that it solves all of the relevant problems to make it the simple and obvious choice for delivering web video in nearly all situations.

    It won’t be this year.

  • Choppy Audio while playing Video from StreamingAssets in Unity's VideoPlayer

    8 November 2017, by Saad Anees

    I have been trying to load a video that I recorded with AVPro Movie Capture (free version). The video file was gigabytes in size, so I converted it using the ffmpeg command -y -i RawVideo.avi -qscale 7 FinalVideo.avi and saved it to StreamingAssets. I got the desired result. Now I want to play that converted video file in the video player for preview, but when the video is played from the URL the audio is very choppy. I played it in the Windows player and in VLC and it was fine; the problem occurs only in Unity’s VideoPlayer.

    PreviewVideo class:

    public class PreviewVideo : MonoBehaviour
    {

       public GameObject VideoSelection;
       public GameObject RecordingCanvas;
       public GameObject FacebookCanvas;
       public GameObject Home;
       public Sprite pauseImage;
       public Sprite playImage;
       public VideoPlayer videoPlayer;
       public GameObject EmailCanvas;

       // Use this for initialization
       void Start ()
       {

       }

       public void Referesh()
       {
           videoPlayer.gameObject.GetComponent<SpriteRenderer>().sprite = Resources.Load<Sprite>("Thumbnails/" + StaticVariables.VideoToPlay);

           videoPlayer.url = Application.streamingAssetsPath + "/FinalVideo.avi";
       }

       public void PlayVideo()
       {
           if (!videoPlayer.isPlaying) {
               videoPlayer.Play ();
           }
       }

       public void Back ()
       {
           this.gameObject.SetActive (false);
           VideoSelection.SetActive (true);
       }

       public void HomeBtn ()
       {
           SceneManager.LoadScene (0);
       }

       public void SendEmailDialogue()
       {
           EmailCanvas.SetActive (true);
           this.gameObject.SetActive (false);
       }

       public void FacebookShare()
       {
           FacebookCanvas.SetActive (true);
       }
    }

    Refresh() is called from the RecordingCanvas class:

    public class RecordingCanvas : MonoBehaviour {

       public GameObject VideoSelection;
       public GameObject PreviewVideo;
       public GameObject Home;
       public GameObject canvas;
       public RawImage rawImage;
       public GameObject videoThumbnail;
       float _seconds;
       bool canStart = false;
       public SpriteRenderer NumSprite;
       public VideoPlayer videoPlayer;
       WebCamTexture webcamTexture;
       Process process;
       void Start ()
       {
           Refresh ();
       }

       public void Refresh()
       {
           _seconds = 0;
           NumSprite.gameObject.SetActive(true);
           webcamTexture = new WebCamTexture (1280, 720);
           webcamTexture.Stop ();
           rawImage.texture = webcamTexture;
           rawImage.material.mainTexture = webcamTexture;
           webcamTexture.Play ();
           videoPlayer.loopPointReached += VideoEndReached;

           videoPlayer.gameObject.GetComponent<SpriteRenderer>().sprite = Resources.Load<Sprite>("Thumbnails/" + StaticVariables.VideoToPlay);
           videoThumbnail.GetComponent<SpriteRenderer>().sprite = Resources.Load<Sprite>("Thumbnails/" + StaticVariables.VideoToPlay);
           videoPlayer.clip = Resources.Load<VideoClip>("Videos/" + StaticVariables.VideoToPlay);
       }

       void Update()
       {
           _seconds += Time.deltaTime;
           print ((int)_seconds);
           if (_seconds < 1) {
               NumSprite.sprite = Resources.Load<Sprite>("Numbers/3");
           }
           else if (_seconds < 2)
               NumSprite.sprite = Resources.Load<Sprite>("Numbers/2");
           else if (_seconds < 3)
               NumSprite.sprite = Resources.Load<Sprite>("Numbers/1");


           if (_seconds >= 3 && _seconds <= 4) {
               canStart = true;
           }

           if (canStart) {
               NumSprite.gameObject.SetActive(false);
               canStart = false;
               FindObjectOfType<CaptureGUI>().StartCapture();
               videoPlayer.Play ();
               videoThumbnail.SetActive (false);
           }

       }

       IEnumerator StartConversion()
       {
           yield return new WaitForSeconds (1.5f);
           process = new Process();

           if (File.Exists (Application.streamingAssetsPath + "/FinalVideo.avi"))
               File.Delete(Application.streamingAssetsPath + "/FinalVideo.avi");

           process.StartInfo.WorkingDirectory = Application.streamingAssetsPath;
           process.StartInfo.FileName = Application.streamingAssetsPath + "/ffmpeg.exe";
           process.StartInfo.Arguments = " -y -i " + StaticVariables.RawVideo + ".avi " + "-qscale 7 " + StaticVariables.FinalVideo + ".avi";
           process.StartInfo.CreateNoWindow = false;
           process.EnableRaisingEvents = true;
           process.Exited += new EventHandler(Process_Exited);
           process.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
           process.Start();
           process.WaitForExit ();
           canvas.SetActive (false);
           PreviewVideo.SetActive (true);
           FindObjectOfType<PreviewVideo>().Referesh();
           File.Copy (Application.streamingAssetsPath + "/FinalVideo.avi", @"C:\xampp\htdocs\facebook\images\FinalVideo.avi", true);
           this.gameObject.SetActive (false);
       }

       void Process_Exited(object sender, EventArgs e)
       {
           process.Dispose ();
       }

       void VideoEndReached(UnityEngine.Video.VideoPlayer vp)
       {
           videoPlayer.Stop ();
           FindObjectOfType<CaptureGUI>().StopCapture();
           webcamTexture.Stop ();
           canvas.SetActive (true);
           StartCoroutine(StartConversion ());
       }

    }

    I am using Unity version 2017.1.1p4 Personal Edition on Windows 10 with a high-end PC. I am making this app for standalone PC.

    I am stuck here and can’t proceed further. Please help me with this issue.