Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (48)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    MediaSPIP is currently available only in French and (...)

  • Authorizations overridden by plugins

    27 April 2010

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

On other sites (5695)

  • How to convert 7sec Video to Gif Android

    21 August 2014, by Donnie Ibiyemi

    I'm making an app that converts short video clips to GIFs. I was wondering if there is a library that directly converts videos to GIFs on the fly on Android.

    I've tried to extract all the frames of the video and encode them into a GIF, but I'm only getting the first frame of the video. My code is below:

        public static final int GIF_DELAY_BETWEEN_FRAMES = 100;
        public static final float SPEED_RATIO = 1.1f;

        Uri videoFileUri = Uri.parse(mOutputFile.toString());
        FFmpegMediaMetadataRetriever retriever = new FFmpegMediaMetadataRetriever();
        retriever.setDataSource(mOutputFile.getAbsolutePath());
        rev = new ArrayList<Bitmap>();
        MediaPlayer mp = MediaPlayer.create(RecorderActivity.this, videoFileUri);
        int millis = mp.getDuration();
        // The loop body was garbled in the original post; presumably it steps
        // through the video (timestamps are in microseconds) and collects one
        // frame per step, e.g.:
        for (int i = 1000000; i < millis * 1000; i += 100000) {
            Bitmap frame = retriever.getFrameAtTime(i, FFmpegMediaMetadataRetriever.OPTION_CLOSEST);
            rev.add(frame);
        }
        try {
            // output path was truncated in the original post ("...test.gif")
            FileOutputStream outStream = new FileOutputStream(".../test.gif");
            outStream.write(generateGIF());
            outStream.close();
        } catch (Exception e) {
            e.printStackTrace();
        }

        public byte[] generateGIF() {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            AnimatedGifEncoder encoder = new AnimatedGifEncoder();
            encoder.setDelay((int) (GIF_DELAY_BETWEEN_FRAMES * (1 / SPEED_RATIO)));
            encoder.start(bos);
            for (Bitmap bitmap : rev) {
                encoder.addFrame(bitmap);
            }
            encoder.finish();
            return bos.toByteArray();
        }

    Please help me guys...thanks
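    For what it's worth, where bundling or shelling out to an ffmpeg binary is an option (e.g. via an ffmpeg wrapper library for Android), the usual palette-based filtergraph turns a short clip into a GIF in one pass; the file names here are placeholders:

        # Convert a short clip to a 10 fps, 320px-wide GIF. palettegen computes a
        # 256-colour palette from the clip itself and paletteuse applies it,
        # which noticeably improves GIF quality over the default palette.
        ffmpeg -i input.mp4 \
          -filter_complex "[0:v] fps=10,scale=320:-1:flags=lanczos,split [a][b]; [a] palettegen [p]; [b][p] paletteuse" \
          -loop 0 output.gif

    The split is needed because a simple -vf chain has one input and one output, while this graph feeds the video to both palettegen and paletteuse.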

  • FFMPEG with moviepy

    5 November 2023, by Shenhav Mor

    I'm working on something that concatenates videos and adds titles to them through moviepy.

    From what I've seen on the web and on my own PC, moviepy runs on the CPU and takes a long time to save (render) a movie. Is there a way to improve the speed by doing moviepy's writing on the GPU, using FFmpeg or something like it?

    I didn't find an answer to that on the web, so I hope some of you can help me. I tried threads=4 and threads=16, but rendering is still very slow and didn't change much.

    My CPU is quite strong (an i7 10700K), but even so, rendering an 8 minute 40 second compilation with moviepy takes a very long time.

    Any ideas? Thanks! The code doesn't really matter, but here it is:

        def Edit_Clips(self):

            clips = []

            time=0.0
            for i,filename in enumerate(os.listdir(self.path)):
                if filename.endswith(".mp4"):
                    tempVideo=VideoFileClip(self.path + "\\" + filename)

                    txt = TextClip(txt=self.arrNames[i], font='Amiri-regular',
                                   color='white', fontsize=70)
                    txt_col = txt.on_color(size=(tempVideo.w + txt.w, txt.h - 10),
                                           color=(0, 0, 0), pos=(6, 'center'), col_opacity=0.6)

                    w, h = moviesize = tempVideo.size
                    txt_mov = txt_col.set_pos(lambda t: (max(w / 30, int(w - 0.5 * w * t)),
                                                         max(5 * h / 6, int(100 * t))))

                    sub=txt_mov.subclip(time,time+4)
                    time = time + tempVideo.duration

                    final=CompositeVideoClip([tempVideo,sub])

                    clips.append(final)

            video = concatenate_videoclips(clips, method='compose')
            print("after")
            video.write_videofile(self.targetPath+"\\"+'test.mp4',threads=16,audio_fps=44100,codec = 'libx264')
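    A common workaround sketch (not a definitive answer): moviepy hands the actual encoding to ffmpeg, so passing an ffmpeg hardware encoder as the `codec` moves the heavy lifting to the GPU. This assumes an NVIDIA card and an ffmpeg build with NVENC support; the helper name below is made up for illustration:

        # Sketch: select NVIDIA's hardware H.264 encoder instead of libx264.
        # Assumes an NVIDIA GPU and an ffmpeg build compiled with NVENC support.
        def nvenc_write_kwargs(preset="fast"):
            """Keyword arguments for VideoClip.write_videofile that pick NVENC."""
            return {
                "codec": "h264_nvenc",                 # GPU encoder instead of libx264
                "audio_fps": 44100,
                "ffmpeg_params": ["-preset", preset],  # passed straight through to ffmpeg
            }

        # usage (hypothetical):
        # video.write_videofile(self.targetPath + "\\" + "test.mp4", **nvenc_write_kwargs())

    Note that with hardware encoding the threads argument matters much less, since the bottleneck moves off the CPU.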

  • Cut, concatenate, and re-encode to h265 with ffmpeg

    30 March 2022, by Make42

    I have two h264 videos that I would like to cut (each of them), concatenate, and re-encode into h265, all with ffmpeg. How can I achieve that, given that the following two approaches do not work?

    First approach


    I tried


        ffmpeg -ss 00:00:05.500 -to 00:12:06.200 -i video1.mp4 \
           -ss 00:00:10.700 -to 01:43:47.000 -i video2.mp4 \
           -filter_complex "[0:v][0:a][1:v][1:a] concat=n=2:v=1:a=1 [outv] [outa]" \
           -map "[outv]" -map "[outa]" \
           -c:v libx265 -vtag hvc1 -c:a copy \
           final.mp4


    but get the error message


    Streamcopy requested for output stream 0:1, which is fed from a complex filtergraph. Filtering and streamcopy cannot be used together.

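    Assuming re-encoding the audio is acceptable, a likely fix for this error is to replace -c:a copy with a real audio encoder (AAC here): streams that come out of a complex filtergraph have been decoded and filtered, so they cannot be stream-copied.

        # Same command as above, but the filtered audio is encoded (-c:a aac)
        # instead of stream-copied, which is what the error message demands.
        ffmpeg -ss 00:00:05.500 -to 00:12:06.200 -i video1.mp4 \
           -ss 00:00:10.700 -to 01:43:47.000 -i video2.mp4 \
           -filter_complex "[0:v][0:a][1:v][1:a] concat=n=2:v=1:a=1 [outv] [outa]" \
           -map "[outv]" -map "[outa]" \
           -c:v libx265 -vtag hvc1 -c:a aac \
           final.mp4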

    Second approach


    Alternatively, I created the file cutpoints.txt with the content


        file video1.mp4
        inpoint 5.5
        outpoint 726.2
        file video2.mp4
        inpoint 600.7
        outpoint 6227.0


    and ran the command


        ffmpeg -f concat -safe 0 -i cutpoints.txt -c:v libx265 -vtag hvc1 -c:a copy final.mp4

    &#xA;

    but then the video does not start exactly at 5.5 seconds, which is not surprising, since the documentation for inpoint says:

    inpoint timestamp

    This directive works best with intra frame codecs, because for non-intra frame ones you will usually get extra packets before the actual In point and the decoded content will most likely contain frames before In point too.
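    Given that caveat, one common workaround (a sketch; the intermediate file names are made up) is to do the frame-accurate cuts with re-encoding first, then concatenate the already-h265 parts without a second encode:

        # Cut and re-encode each part to h265; cuts are frame-accurate because
        # the video is re-encoded rather than stream-copied.
        ffmpeg -ss 00:00:05.500 -to 00:12:06.200 -i video1.mp4 -c:v libx265 -vtag hvc1 -c:a aac part1.mp4
        ffmpeg -ss 00:00:10.700 -to 01:43:47.000 -i video2.mp4 -c:v libx265 -vtag hvc1 -c:a aac part2.mp4
        # Then join the identically encoded parts with the concat demuxer,
        # this time with stream copy (no further quality loss).
        printf "file 'part1.mp4'\nfile 'part2.mp4'\n" > parts.txt
        ffmpeg -f concat -safe 0 -i parts.txt -c copy final.mp4

    This costs one extra encode per part but sidesteps the inpoint/outpoint imprecision entirely.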
