Advanced search

Media (0)

Keyword: - Tags -/performance

No media matching your criteria is available on this site.

Other articles (65)

  • User profiles

    12 April 2011, by

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    The user can access profile editing from their author page; a "Modify your profile" link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" section of the site.
    From there, in the navigation menu, you can reach a "Language management" section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language; once one has, it becomes greyed out in the configuration and (...)

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the instances in the shared-hosting farm on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)

On other sites (7408)

  • Package is installed inside Docker but the actual command throws an exception

    1 June 2022, by user1765862

    I'm trying to use the FFMpegCore package inside a dockerized .NET 6 project. I install ffmpeg inside the Dockerfile and reference the FFMpegCore package in the solution, but when I try to run any of the commands from the FFMpegCore library I get this error:

    An error occurred trying to start process './ffmpeg' with working directory '/var/task'. No such file or directory

    The Docker build completes with no errors.

    Dockerfile

    FROM public.ecr.aws/lambda/dotnet:6 AS base
    ....
    RUN apt-get install -y ffmpeg
    ....
    FROM base AS final
    WORKDIR /var/task
    COPY --from=publish /app/publish .

    As per the FFMpegCore docs, in order to use ffmpeg I need to set its binary folder, so I add ffmpeg.config.json:

    {
      "BinaryFolder": "/var/task",
      "TemporaryFilesFolder": "/tmp"
    }
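
    The same settings can also be applied from code instead of ffmpeg.config.json. A minimal sketch, based on the configuration API described in the FFMpegCore README (the path below is only a placeholder for wherever the ffmpeg binary actually ends up inside the image; verify the exact overload against the package version in use):

    // Hedged sketch: configure FFMpegCore programmatically at startup.
    // GlobalFFOptions.Configure and FFOptions come from the FFMpegCore package.
    using FFMpegCore;

    public static class FFMpegSetup
    {
        public static void Init() =>
            GlobalFFOptions.Configure(new FFOptions
            {
                BinaryFolder = "/var/task",        // must point at the folder that really contains ffmpeg
                TemporaryFilesFolder = "/tmp"
            });
    }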

    The actual error is thrown when I try to execute the following command:

    An error occurred trying to start process './ffmpeg' with working directory '/var/task'. No such file or directory

    This is the place where the error gets triggered:

    using FFMpegCore;
    ...
    public class MyController : ControllerBase
    {
        public async Task<string> Get()
        {
            await FFMpegArguments
                .FromPipeInput(new StreamPipeSource(myfile))
                .OutputToPipe(new StreamPipeSink(outputStream), options => options
                    .WithVideoCodec("vp9")
                    .ForceFormat("webm"))
                .ProcessAsynchronously();
            ...
        }
    }

    Update: After changing the BinaryFolder location to /usr/bin I'm getting the following error

    An error occurred trying to start process '/usr/bin/ffmpeg' with working directory '/var/task'. No such file or directory

    Update #2: This is my complete Dockerfile

    FROM public.ecr.aws/lambda/dotnet:6 AS base

    FROM mcr.microsoft.com/dotnet/sdk:6.0-bullseye-slim as build
    WORKDIR /src
    COPY ["AWSServerless.csproj", "AWSServerless/"]
    RUN dotnet restore "AWSServerless/AWSServerless.csproj"

    WORKDIR "/src/AWSServerless"
    COPY . .
    RUN dotnet build "AWSServerless.csproj" --configuration Release --output /app/build

    FROM build AS publish

    RUN apt-get update \
        && apt-get install -y apt-utils libgdiplus libc6-dev \
        && apt-get install -y ffmpeg

    RUN dotnet publish "AWSServerless.csproj" \
        --configuration Release \
        --runtime linux-x64 \
        --self-contained false \
        --output /app/publish \
        -p:PublishReadyToRun=true

    FROM base AS final
    WORKDIR /var/task

    CMD ["AWSServerless::AWSServerless.LambdaEntryPoint::FunctionHandlerAsync"]
    COPY --from=publish /app/publish .
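
    Note that in this multi-stage build the final image is built FROM base (the AWS Lambda image) and only receives COPY --from=publish /app/publish, so the ffmpeg installed with apt-get in the publish stage is not present in the container that actually runs; that is consistent with the "No such file or directory" errors above. A small hedged sketch of a startup probe (plain System.IO, nothing FFMpegCore-specific) that shows in the container logs whether the binary is really where BinaryFolder points:

    using System;
    using System.IO;

    public static class FfmpegProbe
    {
        // Logs whether an ffmpeg executable is visible at the configured location.
        // "/usr/bin" is just the path tried in the update above; adjust as needed.
        public static void Check(string binaryFolder = "/usr/bin")
        {
            string path = Path.Combine(binaryFolder, "ffmpeg");
            Console.WriteLine(File.Exists(path)
                ? $"ffmpeg found at {path}"
                : $"ffmpeg NOT found at {path}; it was probably not copied into the final image stage");
        }
    }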

  • Patent skullduggery: Tandberg rips off x264 algorithm

    25 November 2010, by Dark Shikari — patents, ripoffs, x264

    Update: Tandberg claims they came up with the algorithm independently: to be fair, I can actually believe this to some extent, as I think the algorithm is way too obvious to be patented. Of course, they also claim that the algorithm isn’t actually identical, since they don’t want to lose their patent application.

    I still don’t trust them, but it’s possible it’s merely bad research (and thus being unaware of prior art) as opposed to anything malicious. Furthermore, word from within their office suggests they’re quite possibly being honest: supposedly the development team does not read x264 code at all. So this might just all be very bad luck.

    Regardless, the patent is still complete tripe, and should never have been filed.

    Most importantly, stop harassing the guy whose name is on the patent (Lars): he’s just a programmer, not the management or lawyers responsible for filing the patent. This is stupid and unnecessary. I’ve removed the original post because of this; it can be found here for those who want to read it.

    Appendix: the details of the patent:

    I figure I’ll go over the exact correspondence between the patent and my code here.

    1. A method for calculating run and level representations of quantized transform coefficients representing pixel values included in a block of a video picture, the method comprising:

    Translation: It’s a run-level coder.

    packing, at a video processing apparatus, each quantized transform coefficients in a value interval [Max, Min] by setting all quantized transform coefficients greater than Max equal to Max, and all quantized transform coefficients less than Min equal to Min

    The quantized coefficients are clipped to a certain valid range to allow them to be packed into bytes (they start as 16-bit values).

    reordering, at the video processing apparatus, the quantized transform ID coefficients according to a predefined order depending on respective positions in the block resulting in an array C of reordered quantized transform coefficients

    This is the zigzag pattern used in H.264 (and most formats) for reordering DCT coefficients. In x264, this is done before the run-level coder step.

    masking, at the video processing apparatus, C by generating an array M containing ones in positions corresponding to positions of C having non-zero values, and zeros in positions corresponding to positions of C having zero values

    This is creating a bitmask based on the coefficient values, the pmovmskb step.

    is generating, at the video processing apparatus, for each position containing a one in M, a run and a level representation by setting the level value equal to an occurring value in a corresponding position of C ; and setting, at the video processing apparatus, for each position containing a one in M5 the run value equal to the number of proceeding positions relative to a current position in M since a previous occurrence of one in M.

    This is the process of creating run/level values from the bitmask.
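
    For reference, a scalar C# sketch of the run/level step described above (nothing like the SSE-optimised x264 code, just the same idea): the coefficients are assumed to be already clipped and zigzag-reordered, the integer bitmask plays the role of the pmovmskb result, and BitOperations.TrailingZeroCount stands in for BSF.

    using System;
    using System.Collections.Generic;
    using System.Numerics;   // BitOperations.TrailingZeroCount plays the role of BSF

    static class RunLevelSketch
    {
        // coeffs: quantized transform coefficients, already clipped and reordered.
        public static IEnumerable<(int Run, int Level)> Encode(int[] coeffs)
        {
            // Build the bitmask M: bit i is set iff coeffs[i] != 0
            // (the job done by the compare + pmovmskb steps in the SIMD version).
            uint mask = 0;
            for (int i = 0; i < coeffs.Length && i < 32; i++)
                if (coeffs[i] != 0)
                    mask |= 1u << i;

            int prev = -1;
            while (mask != 0)
            {
                int pos = BitOperations.TrailingZeroCount(mask); // index of the next non-zero coefficient
                yield return (pos - prev - 1, coeffs[pos]);      // run = zeros since the previous non-zero, level = its value
                prev = pos;
                mask &= mask - 1;                                // clear the lowest set bit
            }
        }
    }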

    Now into the detailed claims:

    2. The method according to Claim 1, wherein the masking further includes, creating an array C' from C where positions corresponding to positions of nonzero values in C are filled with ones, and positions corresponding to positions of zero values in C are filled with zeros, and creating M from C' by extracting the most significant bit from values in respective position of C' and inserting the bits in corresponding positions in M.

    They’re extracting the most significant bit of the values to create a bitmask. This is exactly what the pmovmskb in my algorithm does.

    3. The method according to Claim 2, wherein the creating of the array C' is executed by a C++ function PCMPGTB, and the creating of M from C' is executed by a C++ function PMOVMSKB.

    And here they use pcmpgtb (they call it a C++ function for some reason, but it’s an SSE instruction) to do the clipping of the input values. This is exactly the same method I used in decimate_score. They also use pmovmskb as mentioned.

    4. The method according to Claim 1, wherein the generating of the run and level representation further includes determining positions containing non-zero values in C by corresponding positions containing ones in M.

    5. The method according to Claim 4, wherein the determining of positions containing non-zero values in C is executed by a C++ function BSF.

    Here they iterate over the bitmask of transform coefficients using a “BSF” function to find runs, which is exactly what I did. Of course, BSF isn’t a function, it’s an x86 instruction.

    6. The method according to Claim 1, wherein Max is 256 and Min is 0.

    This is almost surely a typo or mistake of some sort. They mean the Max should be 255, not 256: 256 doesn’t fit in a uint8_t.

    7. The method according to Claim 1, wherein the predefined order follows a zigzag path of transform coefficient positions in the block starting in an upper left corner heading towards a lower right corner.

    This is a description of the typical DCT zigzag pattern (like in H.264, MPEG-2, Theora, etc).

    Everything after this part is just repeating itself with the phrase “an apparatus” added in order to make the USPTO listen to them.

  • Picturebox from AForge FFMPEG empty - C#/WinForms

    1 August 2017, by Jake Delson

    I've done a ton of research and looked at a lot of questions here but can't seem to find anything to help me. I should preface this by saying I'm very new to C#, Windows Forms, and SO! I'm a 1st-year CompSci student coming from C++, experimenting with my own projects for the summer. I'm trying to display a series of bitmaps from a .avi using the AForge.Video.FFMPEG video file reader.

    It seems to be finding the file, getting its data (the console prints dimensions, framerate, and codec) and creating the picturebox, but the picturebox comes up blank/empty. I get the bitmap from the frames of a .avi:

    From AForge example code here

    Then I'm trying to display it with a picture box:

    From MS example code here as well

    And here's my code, essentially a combination of the two:

       public class Simple : Form
    {
       Bitmap videoFrame;

       public Simple()
       {
           try
           {
               // create instance of video reader
               VideoFileReader reader = new VideoFileReader();
               // open video file
               reader.Open(@"C:\Users\User\Desktop\ScanTest3.AVI");
               // check some of its attributes
               Console.WriteLine("width:  " + reader.Width);
               Console.WriteLine("height: " + reader.Height);
               Console.WriteLine("fps:    " + reader.FrameRate);
               Console.WriteLine("codec:  " + reader.CodecName);

               PictureBox pictureBox1 = new PictureBox();

               // read 100 video frames out of it
                for (int i = 0; i < 100; i++)
               {
                   videoFrame = reader.ReadVideoFrame();

                   pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage;
                   pictureBox1.ClientSize = new Size(videoFrame.Width, videoFrame.Height);
                   pictureBox1.Image = videoFrame;

                   // dispose the frame when it is no longer required
                   videoFrame.Dispose();
               }

               reader.Close();
           }

           catch
           {
               Console.WriteLine("Nope");
           }

       }
    }

    class MApplication
    {
       public static void Main()
       {
           Application.Run(new Simple());
       }
    }

    So that's it pretty much. Just a blank picture box coming up, when it should have the first frame of the video, even though no exception is caught (though I'm pretty confident I'm using the try/catch very poorly), and the console printing the correct data for the file:

    width:  720
    height: 480
    fps:    29
    codec:  dvvideo
    [swscaler @ 05E10060] Warning: data is not aligned! This can lead to a speedloss

    Though if anyone could tell me what that warning means, that would be great as well, but I’m mainly just lost as to why there’s no picture printing to the screen.

    Thanks!
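
    (A minimal sketch, assuming AForge.Video.FFMPEG behaves as in the snippet above, of the two changes that normally make the frame show up: the PictureBox must be added to the form's Controls collection, and the Bitmap assigned to Image must not be disposed while the control is still displaying it.)

    using System;
    using System.Drawing;
    using System.Windows.Forms;
    using AForge.Video.FFMPEG;

    public class SimpleSketch : Form
    {
        public SimpleSketch()
        {
            var reader = new VideoFileReader();
            reader.Open(@"C:\Users\User\Desktop\ScanTest3.AVI");

            var pictureBox1 = new PictureBox
            {
                SizeMode = PictureBoxSizeMode.StretchImage,
                ClientSize = new Size(reader.Width, reader.Height)
            };
            Controls.Add(pictureBox1);     // without this, the box is never shown on the form

            Bitmap frame = reader.ReadVideoFrame();
            pictureBox1.Image = frame;     // keep the bitmap alive; dispose it only after
                                           // replacing pictureBox1.Image with the next frame
            reader.Close();
        }
    }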