
Media (91)

Other articles (111)

  • Contribute to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    MediaSPIP is currently only available in French and (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Automatic installation script for MediaSPIP

    25 April 2011, by

    To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to ease this step on a server running a compatible Linux distribution.
    To use it, you need SSH access to your server and a "root" account, which allows the dependencies to be installed. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

On other sites (13266)

  • Generate a .JSON file in After Effects or Premiere ?

    9 April 2021, by Ryan

    I am working on a video template app where users can pick a video template, add their images as a background layer, and render an .mp4 video.

    


    Each template I upload to the server is a zip file consisting of 4 elements:
1: Background Video
2: Source Images
3: Output Video
4: .JSON file

    


    I am a motion designer and not good with code. However, when I check the .JSON file I can tell that it holds some sort of animation data: I can see text pointing towards the source images and the background video, plus a bunch of other text data which I think is what animates the source images.

    


    Being a motion designer, I can make some really nice animation overlays and simply swap them in as the BACKGROUND VIDEO element, and it works. I can make as many animation templates as I want using this one .JSON file.

    


    However (and this is where the limitation starts), even though I can have really exciting overlay animations, the base source images are just 3 images fading in and out, and that behaviour comes from the .JSON file.

    


    What would be really nice is if I could somehow create/generate my own .JSON files; this way I could have the source images animate however I like (position, scale, rotation), but I have no clue how this .JSON can be generated.
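    As an illustration of one possible direction only: the real schema would have to be copied from one of the existing template .JSON files, so the field names below (background, layers, keyframes, position, scale, rotation) are placeholders, not the app's actual format. A small Python script can then write whatever keyframe data you design:

import json

# Hypothetical template structure -- the real field names must be taken from one
# of the existing template .JSON files; these are placeholders for illustration.
template = {
    "background": "background.mp4",
    "layers": [
        {
            "source": "image1.png",
            "keyframes": [
                {"time": 0.0, "position": [0, 0], "scale": 1.0, "rotation": 0},
                {"time": 2.0, "position": [200, 0], "scale": 1.2, "rotation": 15},
            ],
        },
        {
            "source": "image2.png",
            "keyframes": [
                {"time": 2.0, "position": [0, 0], "scale": 0.8, "rotation": 0},
                {"time": 4.0, "position": [0, 100], "scale": 1.0, "rotation": -10},
            ],
        },
    ],
}

# Write the animation data out as a .JSON file.
with open("template.json", "w") as f:
    json.dump(template, f, indent=2)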

    


    This link contains the main files that make up one template. You can check the .JSON file and how it works with the other elements.
One Template Files

    


  • Create Panorama from Non-Sequential Video Frames

    6 May 2021, by M.Innat

    There is a similar question (not as detailed, and with no exact solution).

    



    


    I want to create a single panorama image from video frames. For that, I first need to extract a minimal set of non-sequential video frames. A demo video file is uploaded here.

    


    What I Need

    


    A mechanism that produces not only non-sequential video frames, but produces them in such a way that they can be used to create a panorama image. A sample is given below. As we can see, to create a panorama image all the input samples must share a minimum overlapping region with each other, otherwise it cannot be done.

    


    (sample image)

    


    So, if I have the following order of video frames:

    


    A, A, A, B, B, B, B, C, C, A, A, C, C, C, B, B, B ...


    


    To create a panorama image, I need to get something like the following: a reduced set of adjacent frames, each with a minimum overlap with the next.

    


         [overlap]  [overlap]  [overlap] [overlap]  [overlap]
 A,    A,B,       B,C,       C,A,       A,C,      C,B,  ...
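
    One possible way to automate that selection (a sketch, not a tested solution for this video): compare each incoming frame against the last kept frame using feature matching, and keep a new frame only once the estimated pan exceeds some fraction of the frame width, so that consecutive kept frames still overlap without being near-duplicates. The threshold and the use of the median horizontal shift as an overlap proxy are assumptions:

import cv2

# Rough sketch: keep a frame only once the scene has shifted far enough from the
# last kept frame, using ORB feature matching to estimate the shift. The 0.25
# threshold (a fraction of the frame width) would need tuning per video.
def select_overlapping_frames(video_path, min_shift=0.25):
    orb = cv2.ORB_create(1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    cap = cv2.VideoCapture(video_path)
    kept, last_kp, last_des = [], None, None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp, des = orb.detectAndCompute(gray, None)
        if des is None:
            continue
        if last_des is None:
            kept.append(frame)              # always keep the first usable frame
            last_kp, last_des = kp, des
            continue
        matches = matcher.match(last_des, des)
        if not matches:
            continue
        # median horizontal displacement of matched keypoints, used as a cheap
        # proxy for how far the camera has panned since the last kept frame
        shifts = sorted(abs(kp[m.trainIdx].pt[0] - last_kp[m.queryIdx].pt[0])
                        for m in matches)
        if shifts[len(shifts) // 2] > min_shift * frame.shape[1]:
            kept.append(frame)              # overlapping, but not a near-duplicate
            last_kp, last_des = kp, des

    cap.release()
    return kept

    The kept frames could then be rotated and passed to the stitcher in the same way as in the snippets further below.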


    


    What I've Tried and Where I'm Stuck

    


    A demo video clip is given above. To get non-sequential video frames, I primarily rely on ffmpeg software.

    


    Trial 1 Ref.

    


    ffmpeg -i check.mp4 -vf mpdecimate,setpts=N/FRAME_RATE/TB -map 0:v out.mp4


    


    After that, I sliced out.mp4 into individual frames using OpenCV:

    


import cv2
from pathlib import Path

vframe_dir = Path("vid_frames/")
vframe_dir.mkdir(parents=True, exist_ok=True)

vidcap = cv2.VideoCapture('out.mp4')
success, image = vidcap.read()
count = 0

# dump every frame of the decimated video to disk
while success:
    cv2.imwrite(f"{vframe_dir}/frame{count}.jpg", image)
    success, image = vidcap.read()
    count += 1

vidcap.release()


    


    Next, I rotated these saved images (as my video was shot vertically).

    


import os
import cv2
from pathlib import Path
from tqdm import tqdm

vframe_dir = Path("out/")
vframe_dir.mkdir(parents=True, exist_ok=True)

vframe_dir_rot = Path("vframe_dir_rot/")
vframe_dir_rot.mkdir(parents=True, exist_ok=True)

for each_img in tqdm(os.listdir(vframe_dir)):
    image = cv2.imread(f"{vframe_dir}/{each_img}")[:, :, ::-1]  # read (BGR to RGB)

    # 180 degrees plus 90 degrees clockwise = 90 degrees counter-clockwise overall
    image = cv2.rotate(image, cv2.ROTATE_180)
    image = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)

    cv2.imwrite(f"{vframe_dir_rot}/{each_img}", image[:, :, ::-1])  # save (RGB to BGR)


    


    The output of this method (with ffmpeg) is OK, but it is not suitable for creating the panorama image, because the resulting frames do not overlap with each other sequentially. Thus the panorama can't be generated.

    
 


    Trial 2 - Ref

    


    ffmpeg -i check.mp4 -vf decimate=cycle=2,setpts=N/FRAME_RATE/TB -map 0:v out.mp4


    


    This didn't work at all.

    


    Trial 3

    


    ffmpeg -i check.mp4 -ss 0 -qscale 0 -f image2 -r 1 out/images%5d.png


    


    No luck either. However, I found this last ffmpeg command was the closest so far, but still not enough. Compared to the others it gave me a small set of non-duplicate frames (good), but the bad part is that it still includes frames I don't need, so I manually pick some desired frames, and then the OpenCV stitching algorithm works. So, after picking some frames and rotating them (as mentioned before):

    


    stitcher = cv2.Stitcher.create()
status, pano = stitcher.stitch(images) # images: manually picked video frames -_- 
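
    As a minimal sketch of that last step (assuming the picked and rotated frames are the only files in the vframe_dir_rot/ folder from earlier), the stitcher's status code can also be checked explicitly:

import os
import cv2

# Load the picked, rotated frames from the folder used in the earlier step.
frame_dir = "vframe_dir_rot"
images = [cv2.imread(os.path.join(frame_dir, name))
          for name in sorted(os.listdir(frame_dir))]
images = [img for img in images if img is not None]   # skip unreadable files

stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", pano)
else:
    print("Stitching failed with status", status)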


    
 


    Update

    


    After some trials, I have kind of adopted a non-programming solution, but I would love to see an efficient programmatic approach.

    


    On the given demo video, I used Adobe products (Premiere Pro and Photoshop) to do this task, following a video instruction. The issue was that I first extracted essentially all the video frames in Premiere (without dropping any, which adds computational cost later) and used Photoshop to stitch them (as in the YouTube video instruction). That was too heavy for these editor tools and is not an elegant way to do it, but the output was better than anything so far, even though I only used a few hundred (400+) of the 1200+ video frames.

    



    



    


    Here are some bigger challenges. The original video clips have some quite serious conditions, unlike the given demo video clip:

    


      

    • It's not straightforward, i.e. camera shaking
    • Lighting conditions, i.e. the same spot looks different at different times
    • Camera flickering or banding


    


    These scenarios are not present in the given demo video, and they bring additional, heavy challenges when creating panorama images from such videos. Even with the non-programming way (using the Adobe tools) I couldn't get a good result.

    



    


    However, for now, all I'm interested in is getting a panorama image from the given demo video, which does not have the above conditions. But I would love to hear any comment or suggestion on that.

    


  • Recording of Full HD 60 FPS videos in C#

    17 March 2021, by Alexander Naumov

    My application works with a high-speed camera. I am trying to record a video file using C#.

    


    The task is pretty "simple": record the video from the camera. We need to record medium quality (higher is better) videos to preserve as much detail as possible.

    


      

    • Resolution: 1920 x 1080 (Full HD)
    • Frames per second (FPS): 60
    • Bitrate: I started from 10000*1000 (but now I don't know)
    • Media container: MP4 or AVI (does not really matter, we just need to solve our task)
    • Codec: also does not matter, we just need speed and quality.
    • Maximum size of the video file: 10 GB/hour


    


    The framerate of the camera can be changed during recording by the camera itself (not by the user), so it's necessary to have something like a timestamp for every frame, or a similar mechanism.

    


    The problem is that recording is not fast enough.
Example: using the AForge libraries with generated pictures ("white" noise), the duration of the test videos is 20 seconds.

    


    Time to create the video using different codecs (provided by AForge):

    


      

    • Codec: MPEG4, Time: 33.703 s
    • Codec: WMV1, Time: 45.338 s
    • Codec: WMV2, Time: 45.530 s
    • Codec: MSMPEG4v2, Time: 43.775 s
    • Codec: MSMPEG4v3, Time: 44.390 s
    • Codec: H263P, Time: 38.894 s
    • Codec: FLV1, Time: 39.151 s
    • Codec: MPEG2, Time: 35.561 s
    • Codec: Raw, Time: 61.456 s


    


    Other libraries we've tried did not satisfy us:
Accord.FFMPEG is slow because of strange inner exceptions.
EmguCV.FFMPEG has no timestamps and therefore creates corrupted video.

    


    Recording the video to the SSD drive did not give us any visible acceleration.

    


    A Google search gives no clear examples or modern solutions for this task, which is the main reason for writing here.

    


    Here is a code sample from our test:

    


    // Fields such as fps, w, h, time, frames, N and bmps (pre-generated noise bitmaps)
    // are defined elsewhere in the test class.
    private static void AForge_test()
    {
        Console.WriteLine("AForge test started...");
        unsafe
        {
            Stopwatch watch = new Stopwatch();

            Console.WriteLine("FPS: {0}, W:{1}, H:{2}, T:{3}", fps, w, h, time);

            // Iterate over every codec AForge exposes and time how long the encoding takes.
            AForge.Video.FFMPEG.VideoCodec[] codecs = (AForge.Video.FFMPEG.VideoCodec[]) Enum.GetValues(typeof(AForge.Video.FFMPEG.VideoCodec));

            for (int k = 0; k < codecs.Length; k++)
            {
              /*  if (codecs[k] != VideoCodec.MPEG4)
                    continue;*/
                try
                {
                    watch.Restart();

                    Random r2 = new Random(200);
                    AForge.Video.FFMPEG.VideoFileWriter vw = new AForge.Video.FFMPEG.VideoFileWriter();
                    string name = String.Format("E:\\VideosHDD\\AForge_test_{0}_mid.avi", Enum.GetName(typeof(AForge.Video.FFMPEG.VideoCodec), codecs[k]));
                    vw.Open(name, w, h, fps, codecs[k], 10000 * 1000);

                    // Write the pre-generated noise frames, cycling through the N bitmaps.
                    for (int i = 0; i < frames; i++)
                    {
                        vw.WriteVideoFrame(bmps[i % N]);
                    }

                    vw.Close();
                    vw.Dispose();

                    watch.Stop();

                    Console.WriteLine("Codec: {0}, Time: {1:F3}", Enum.GetName(typeof(AForge.Video.FFMPEG.VideoCodec), codecs[k]), watch.ElapsedMilliseconds / 1000d);
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Error " + codecs[k].ToString() + ": " + ex.Message);
                }
            }
        }

        Console.ReadKey();
    }


    


    Additional:

    1. We are ready to use non-free solutions, but free ones are preferable.
    2. One of the supposed reasons for the low recording speed is that the application is built as x86. I tried hard to find an x64 build of AForge but failed. We really don't know whether the application architecture has any influence on the recording speed.


    I am aware that I don't know all the background of video recording and other "little" things, so I would be very pleased to get solutions with clear explanations.