
Media (0)

Keyword: - Tags - / logo

No media matching your criteria is available on the site.

Other articles (55)

  • Add notes and captions to images

    7 February 2011

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area in order to change the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Installation prerequisites

    31 January 2010

    Preamble
    This article is not meant to detail how to install these software packages, but rather to give information about their specific configuration.
    First of all, SPIPMotion, like MediaSPIP, is designed to run on Debian-based Linux distributions (Ubuntu, etc.). The documentation on this site therefore refers to these distributions. It is also possible to use it on other Linux distributions, but proper operation cannot be guaranteed.
    It (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8448)

  • QTableWidget and QProcess - update table based on multiple process results

    9 March 2017, by Spencer

    I have a Python program that runs through a QTableWidget and, for each item, runs a QProcess (an FFMPEG process, to be exact). What I'm trying to do is update the "parent" cell when the process completes. Right now there is a for loop that goes through each row and launches a process for each, and connects the finished signal of that process to a "finished" function, which updates the QTableWidget cell. I'm just having trouble properly telling the function WHICH cell to update. Right now I am passing it the index of the current row (seeing as it is being spawned by the for loop), but what happens is that by the time the processes start to finish it only ever gets the last row in the table... I'm quite new to Python and PyQt, so it is possible there is some fundamental thing I have wrong here!

    I tried passing the actual QTableWidgetItem instead of the index, but I got this error: "RuntimeError: wrapped C/C++ object of type QTableWidgetItem has been deleted"

    My code; the function "finished" and line #132 are the relevant parts:

    import sys, os, re
    from PyQt4 import QtGui, QtCore

    class BatchTable(QtGui.QTableWidget):
       def __init__(self, parent):
           super(BatchTable, self).__init__(parent)
           self.setAcceptDrops(True)
           self.setColumnCount(4)
           self.setColumnWidth(1,50)
           self.hideColumn(3)
           self.horizontalHeader().setStretchLastSection(True)
           self.setHorizontalHeaderLabels(QtCore.QString("Status;Alpha;File;Full Path").split(";"))

           self.doubleClicked.connect(self.removeProject)

       def removeProject(self, myItem):
           row = myItem.row()
           self.removeRow(row)

       def dragEnterEvent(self, e):
           if e.mimeData().hasFormat('text/uri-list'):
               e.accept()
           else:
               print "nope"
               e.ignore()

       def dragMoveEvent(self, e):
           e.accept()

       def dropEvent(self, e):
           if e.mimeData().hasUrls:
               for url in e.mimeData().urls():
                   chkBoxItem = QtGui.QTableWidgetItem()
                   chkBoxItem.setFlags(QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsEnabled)
                   chkBoxItem.setCheckState(QtCore.Qt.Unchecked)

                   rowPosition = self.rowCount()
                   self.insertRow(rowPosition)
                   self.setItem(rowPosition, 0, QtGui.QTableWidgetItem("Ready"))
                   self.setItem(rowPosition, 1, chkBoxItem)
                   self.setItem(rowPosition, 2, QtGui.QTableWidgetItem(os.path.split(str(url.toLocalFile()))[1]))
                   self.setItem(rowPosition, 3, QtGui.QTableWidgetItem(url.toLocalFile()))
                   self.item(rowPosition, 0).setBackgroundColor(QtGui.QColor(80, 180, 30))

    class ffmpegBatch(QtGui.QWidget):
       def __init__(self):
           super(ffmpegBatch, self).__init__()
           self.initUI()

       def initUI(self):

           self.edit = QtGui.QTextEdit()

           cmdGroup = QtGui.QGroupBox("Commandline arguments")
           fpsLbl = QtGui.QLabel("FPS:")
           self.fpsCombo = QtGui.QComboBox()
           self.fpsCombo.addItem("29.97")
           self.fpsCombo.addItem("23.976")
           hbox1 = QtGui.QHBoxLayout()
           hbox1.addWidget(fpsLbl)
           hbox1.addWidget(self.fpsCombo)
           cmdGroup.setLayout(hbox1)

           saveGroup = QtGui.QGroupBox("Output")
           self.outputLocation = QtGui.QLineEdit()
           self.browseBtn = QtGui.QPushButton("Browse")
           saveLocationBox = QtGui.QHBoxLayout()
           # Todo: add "auto-step up two folders" button
           saveLocationBox.addWidget(self.outputLocation)
           saveLocationBox.addWidget(self.browseBtn)
           saveGroup.setLayout(saveLocationBox)

           runBtn = QtGui.QPushButton("Run Batch Transcode")

           mainBox = QtGui.QVBoxLayout()
           self.table = BatchTable(self)
           # TODO: add "copy from clipboard" feature
           mainBox.addWidget(self.table)
           mainBox.addWidget(cmdGroup)
           mainBox.addWidget(saveGroup)
           mainBox.addWidget(runBtn)
           mainBox.addWidget(self.edit)

           self.setLayout(mainBox)
           self.setGeometry(300, 300, 600, 500)
           self.setWindowTitle('FFMPEG Batch Converter')

           # triggers/events
           runBtn.clicked.connect(self.run)

       def RepresentsInt(self, s):
           try:
               int(s)
               return True
           except ValueError:
               return False

       def run(self):
           if (self.outputLocation.text() == ''):
               return
           for projIndex in range(self.table.rowCount()):
               # collect some data
               ffmpeg_app = "C:\\Program Files\\ffmpeg-20150702-git-03b2b40-win64-static\\bin\\ffmpeg"
               frameRate = self.fpsCombo.currentText()
               inputFile = self.table.model().index(projIndex,3).data().toString()
               outputPath = self.outputLocation.text()
               outputPath = outputPath.replace("/", "\\")

               # format the input for ffmpeg
               # find how the exact number range, stored as 'd'
               imageName = os.path.split(str(inputFile))[1]
               imageName, imageExt = os.path.splitext(imageName)
               length = len(imageName)
               d = 0
               while (self.RepresentsInt(imageName[length-2:length-1]) == True):
                   length = length-1
                   d = d+1
               inputPath = os.path.split(str(inputFile))[0]
               inputFile = imageName[0:length-1]
               inputFile = inputPath + "/" + inputFile + "%" + str(d+1) + "d" + imageExt
               inputFile = inputFile.replace("/", "\\")

               # format the output
               outputFile = outputPath + "\\" + imageName[0:length-2] + ".mov"


               # build the commandline
               cmd = '"' + ffmpeg_app + '"' + ' -y -r ' + frameRate + ' -i ' + '"' + inputFile + '"' + ' -vcodec dnxhd -b:v 145M -vf colormatrix=bt601:bt709 -flags +ildct ' + '"' + outputFile + '"'

               # launch the process
               proc = QtCore.QProcess(self)
               proc.finished.connect(lambda: self.finished(projIndex))
               proc.setProcessChannelMode(proc.MergedChannels)
               proc.start(cmd)
               proc.readyReadStandardOutput.connect(lambda: self.readStdOutput(proc, projIndex, 100))
               self.table.setItem(projIndex, 0, QtGui.QTableWidgetItem("Running..."))
               self.table.item(projIndex, 0).setBackgroundColor(QtGui.QColor(110, 145, 30))

       def readStdOutput(self, proc, projIndex, total):
           currentLine = QtCore.QString(proc.readAllStandardOutput())
           currentLine = str(currentLine)
           frameEnd = currentLine.find("fps", 0, 15)
           if frameEnd != -1:
               m = re.search("\d", currentLine)
               if m:
                   frame = currentLine[m.start():frameEnd]
                   percent = (float(frame)/total)*100
                   print "Percent: " + str(percent)
                   self.edit.append(str(percent))
                   self.table.setItem(projIndex, 0, QtGui.QTableWidgetItem("Encoded: " + str(percent) + "%"))

       def finished(self, projIndex):
           # TODO: This isn't totally working properly for multiple processes (seems to get confused)
           print "A process completed"
           print self.sender().readAllStandardOutput()
           if self.sender().exitStatus() == 0:
               self.table.setItem(projIndex, 0, QtGui.QTableWidgetItem("Encoded"))
               self.table.item(projIndex, 0).setBackgroundColor(QtGui.QColor(45, 145, 240))


    def main():
       app = QtGui.QApplication(sys.argv)
       ex = ffmpegBatch()
       ex.show()
       sys.exit(app.exec_())

    if __name__ == '__main__':
       main()

    (And yes I do know that my percentage update is totally wrong right now, still working on that...)
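
    The behaviour described here, where every callback ends up acting on the last row, matches Python's late-binding closure rule: a lambda such as lambda: self.finished(projIndex) looks up projIndex when the signal fires, not when the lambda is created, so all the callbacks see the final value of the loop variable. A minimal sketch of the pitfall and of one common workaround, binding the loop variable as a default argument (plain callables, no Qt involved; the names are illustrative):

    callbacks_wrong = [lambda: i for i in range(3)]
    callbacks_fixed = [lambda row=i: row for i in range(3)]

    print [cb() for cb in callbacks_wrong]   # [2, 2, 2] - i is looked up when each lambda is called
    print [cb() for cb in callbacks_fixed]   # [0, 1, 2] - the default argument captured each value

    Applied to the loop above, the same idea would look roughly like proc.finished.connect(lambda exitCode, exitStatus, row=projIndex: self.finished(row)), and similarly for the readyReadStandardOutput connection (binding proc as a default argument as well). This is only a sketch of the workaround under those assumptions, not a tested drop-in fix.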

  • How do I properly enable ffmpeg for matplotlib.animation?

    7 March 2017, by spanishgum

    I have covered a lot of ground on Stack Overflow so far trying to get ffmpeg going so I can make a time-lapse video.

    I am on a CentOS 7 machine, running python3.7.0a0.

    python3
    >>> import numpy as np
    >>> np.__version__
    '1.12.0'
    >>> import matplotlib as mpl
    >>> mpl.__version__
    '2.0.0'
    >>> import mpl_toolkits.basemap as base
    >>> base.__version__
    '1.0.7'

    I found this github gist on installing ffmpeg. I used the chromium source, and installed without a prefix option (using the default).

    I have confirmed that ffmpeg is installed, although I don’t know anything about testing whether it works.

    which ffmpeg
    /usr/local/bin/ffmpeg

    ffmpeg -version
    ffmpeg version N-83533-gada281d Copyright (c) 2000-2017 the FFmpeg developers
    built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-11)
    configuration:
    libavutil      55. 47.100 / 55. 47.100
    libavcodec     57. 80.100 / 57. 80.100
    libavformat    57. 66.102 / 57. 66.102
    libavdevice    57.  2.100 / 57.  2.100
    libavfilter     6. 73.100 /  6. 73.100
    libswscale      4.  3.101 /  4.  3.101
    libswresample   2.  4.100 /  2.  4.100

    I tried to run a few example scripts I found online:

    [1] http://matplotlib.org/examples/animation/basic_example_writer.html

    [2] http://stackoverflow.com/a/23098090/3454650

    Everything works fine up until I try to save the animation file.

    [1]

    anim.save('basic_animation.mp4', writer = FFwriter, fps=30, extra_args=['-vcodec', 'libx264'])

    [2]

    im_ani.save('im.mp4', writer=writer)

    I found here that explicitly setting the path to ffmpeg might be necessary, so I added this to the top of the test scripts:

    plt.rcParams['animation.ffmpeg_path'] = '/usr/local/bin/ffmpeg'
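
    For reference, a minimal self-contained sketch of the kind of save call being attempted, assuming matplotlib's FFMpegWriter and the ffmpeg path set above (the figure content, frame count and output file name are illustrative, not taken from the linked examples):

    import matplotlib
    matplotlib.use('Agg')  # render off-screen; no display needed on a headless box
    import matplotlib.pyplot as plt
    import matplotlib.animation as animation
    import numpy as np

    plt.rcParams['animation.ffmpeg_path'] = '/usr/local/bin/ffmpeg'

    fig, ax = plt.subplots()
    line, = ax.plot([], [])
    ax.set_xlim(0, 2 * np.pi)
    ax.set_ylim(-1, 1)
    x = np.linspace(0, 2 * np.pi, 200)

    def update(frame):
        # shift the sine wave a little on every frame
        line.set_data(x, np.sin(x + frame / 10.0))
        return line,

    anim = animation.FuncAnimation(fig, update, frames=50)
    writer = animation.FFMpegWriter(fps=30, extra_args=['-vcodec', 'libx264'])
    anim.save('basic_animation.mp4', writer=writer)

    If a standalone script along these lines fails in the same way, the problem is more likely in the ffmpeg build than in the original animation code.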

    I tried a few more tweaks in the code but always get the same response, which I do not know how to begin deciphering:

    Traceback (most recent call last):
     File "testanim.py", line 27, in <module>
       writer.grab_frame()
     File "/usr/local/lib/python3.7/contextlib.py", line 100, in __exit__
       self.gen.throw(type, value, traceback)
     File "/usr/local/lib/python3.7/site-packages/matplotlib/animation.py", line 256, in saving
       self.finish()
     File "/usr/local/lib/python3.7/site-packages/matplotlib/animation.py", line 276, in finish
       self.cleanup()
     File "/usr/local/lib/python3.7/site-packages/matplotlib/animation.py", line 311, in cleanup
       out, err = self._proc.communicate()
     File "/usr/local/lib/python3.7/subprocess.py", line 836, in communicate
       stdout, stderr = self._communicate(input, endtime, timeout)
     File "/usr/local/lib/python3.7/subprocess.py", line 1474, in _communicate
       selector.register(self.stdout, selectors.EVENT_READ)
     File "/usr/local/lib/python3.7/selectors.py", line 351, in register
       key = super().register(fileobj, events, data)
     File "/usr/local/lib/python3.7/selectors.py", line 237, in register
       key = SelectorKey(fileobj, self._fileobj_lookup(fileobj), events, data)
     File "/usr/local/lib/python3.7/selectors.py", line 224, in _fileobj_lookup
       return _fileobj_to_fd(fileobj)
     File "/usr/local/lib/python3.7/selectors.py", line 39, in _fileobj_to_fd
       "{!r}".format(fileobj)) from None
    ValueError: Invalid file object: <_io.BufferedReader name=6>

    Is there something in my configuration that is malformed? I searched Google for this error for some time but never found anything relevant to animations/ffmpeg. Any help would be greatly appreciated.


    UPDATE:

    @LordNeckBeard pointed me here: https://trac.ffmpeg.org/wiki/CompilationGuide/Centos

    I ran into problems with installing the x264 encoding dependency. Some files in libavcodec/*.c (in the make output) were reporting undefined references to several functions. After a wild goose chase I found this: https://mailman.videolan.org/pipermail/x264-devel/2015-February/010971.html

    To fix the x264 installation, I simply added some configure flags:

    ./configure --enable-static --enable-shared --extra-ldflags="-lswresample -llzma"

    UPDATE:

    So everything installed fine after fixing the libx264 problems. I went ahead and copied the ffmpeg binary from the ffmpeg_build folder into /usr/local/bin/ffmpeg.

    After running the script I was getting problems where ffmpeg could not find the libx264 shared object. I think I will have to recompile everything using different prefixes. My intuition tells me there are old files lying around from everything I have messed with, using some configuration that is broken.

    So I decided maybe I should just try to use NUX: http://linoxide.com/linux-how-to/install-ffmpeg-centos-7/
    I installed ffmpeg using the new rpm, but to no avail. I still was not able to run ffmpeg because of a missing shared object.

    Finally, instead of using files copied into my /usr/local/bin folder, I ran ffmpeg directly from the build bin directory. It turns out that this does work properly!

    So in essence, if I want to install ffmpeg system-wide, I need to manually compile from source again, but using a non-local prefix.

  • Write audio packet to file using ffmpeg

    27 February 2017, by iamyz

    I am trying to write audio packets to a file using ffmpeg. The source device sends a packet after some interval, e.g.:

    First packet has a time stamp 00:00:00
    Second packet has a time stamp 00:00:00.5000000
    Third packet has a time stamp 00:00:01
    And so on...

    Means two packet per second.

    I want to encode those packets and write them to a file.

    I am referring to the FFmpeg example Muxing.c.

    While encoding and writing there is no error, but the output file has only 2 seconds of audio and the playback speed is also super fast.

    The video frames are proper according to the settings.

    I think the problem is related to the calculation of the pts, dts and duration of the packets.

    How should I calculate proper values for pts, dts and duration? Or is this problem related to something else?

    Code:

    void AudioWriter::WriteAudioChunk(IntPtr chunk, int lenght, TimeSpan timestamp)
    {
       int buffer_size = av_samples_get_buffer_size(NULL, outputStream->tmp_frame->channels, outputStream->tmp_frame->nb_samples,  outputStream->AudioStream->codec->sample_fmt, 0);

       uint8_t *audioData = reinterpret_cast<uint8_t*>(static_cast<void*>(chunk));
       int ret = avcodec_fill_audio_frame(outputStream->tmp_frame,outputStream->Channels, outputStream->AudioStream->codec->sample_fmt, audioData, buffer_size, 1);

       if (!ret)
          throw gcnew System::IO::IOException("A audio file was not opened yet.");

       write_audio_frame(outputStream->FormatContext, outputStream, audioData);
    }


    static int write_audio_frame(AVFormatContext *oc, AudioWriterData^ ost, uint8_t *audioData)
    {
          AVCodecContext *c;
          AVPacket pkt = { 0 };
          int ret;
          int got_packet;
          int dst_nb_samples;

          av_init_packet(&pkt);
          c = ost->AudioStream->codec;

          AVFrame *frame = ost->tmp_frame;

         if (frame)
         {
             dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples, c->sample_rate, c->sample_rate, AV_ROUND_UP);
             if (dst_nb_samples != frame->nb_samples)
               throw gcnew Exception("dst_nb_samples != frame->nb_samples");

             ret = av_frame_make_writable(ost->AudioFrame);
             if (ret < 0)
                throw gcnew Exception("Unable to make writable.");

             ret = swr_convert(ost->swr_ctx, ost->AudioFrame->data, dst_nb_samples, (const uint8_t **)frame->data, frame->nb_samples);
             if (ret < 0)
               throw gcnew Exception("Unable to convert to destination format.");

             frame = ost->AudioFrame;

             AVRational timebase = { 1, c->sample_rate };
             frame->pts = av_rescale_q(ost->samples_count, timebase, c->time_base);
             ost->samples_count += dst_nb_samples;
         }

         ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
         if (ret < 0)
           throw gcnew Exception("Error encoding audio frame.");

         if (got_packet)
         {
           ret = write_frame(oc, &c->time_base, ost->AudioStream, &pkt);
           if (ret < 0)
               throw gcnew Exception("Audio is not written.");
         }
         else
            throw gcnew Exception("Audio packet encode failed.");

         return (ost->AudioFrame || got_packet) ? 0 : 1;
    }

    static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
    {
       av_packet_rescale_ts(pkt, *time_base, st->time_base);
       pkt->stream_index = st->index;
       return av_interleaved_write_frame(fmt_ctx, pkt);
    }
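
    For intuition about what the pts arithmetic above produces, here is a small numeric sketch of the same bookkeeping in isolation, written in plain Python just for the arithmetic (the sample rate and frame size are illustrative, not taken from the code above):

    sample_rate = 44100          # Hz, illustrative
    nb_samples = 1024            # samples per encoded frame, illustrative

    # With a time base of 1/sample_rate, pts is simply the running sample count:
    # frame N starts at pts = N * nb_samples and covers nb_samples ticks.
    samples_count = 0
    for n in range(3):
        pts = samples_count
        print n, pts, pts / float(sample_rate)   # frame index, pts in ticks, start time in seconds
        samples_count += nb_samples

    In other words, the timestamps only advance by the number of samples actually handed to the encoder; if the source delivers less audio per second of wall-clock time than the sample rate implies, the encoded stream comes out shorter and plays back faster, which would be consistent with the symptom described above, though that is only a guess based on the code shown.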