
Other articles (71)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
    To use it, simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Custom menus

    14 November 2010

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators fine-tune those menus.
    Menus created when the site is initialised
    By default, three menus are created automatically when the site is initialised: the main menu; identifier: barrenav; this menu is usually inserted at the top of the page after the header block, and its identifier makes it compatible with Zpip-based templates; (...)

  • Possible deployments

    31 January 2010

    Two types of deployment are possible, depending on two factors: the chosen installation method (standalone or as a farm); and the expected number of daily encodings and level of traffic.
    Video encoding is a heavy process that consumes an enormous amount of system resources (CPU and RAM), and all of this has to be taken into account. This system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version uses only one (...)

On other sites (12837)

  • Paperclip geometry ignored

    27 March 2017, by ACIDSTEALTH

    I have a model called Snapshot, which represents a user-recorded video. I want to take the input video file and scale it to fit within a 720x720 box (ImageMagick documentation). I then want to capture some screenshots of the video to represent it in my app. These sizes are specified accordingly in my model.

    I expect the output of this to be an original video with a maximum width of 720px (assuming it was recorded in landscape mode), a large JPG image with a maximum width of 540px, etc.

    When I attach the video file and save the model, the images and video file are processed but the result is not what I expected. The video file has a resolution of 960x720 and the images are all square (540x540, 360x360, etc).

    I’m not sure if I’m doing something wrong or if this is just a bug with Paperclip. Here is my code:

    class Snapshot < ApplicationRecord
       has_attached_file :video,
           styles: {
               original: { geometry: "720x720", format: 'mp4' },
               large: { geometry: "540x540", format: 'jpg' },
               medium: { geometry: "360x360", format: 'jpg' },
               thumb: { geometry: "180x180", format: 'jpg' }
           },
           default_url: "", processors: [:transcoder]
       validates_attachment_content_type :video, content_type: /\Avideo\/.*\Z/
       validates_attachment_size :video, less_than: 100.megabytes
    end

    I have also tried adjusting the geometry to 720x720>, 540x540>, etc. When I did this, the attachments’ resolution was unchanged, which seems to go completely against my understanding of how ImageMagick geometry works. I have numerous other models with image-only attachments that do not suffer from this issue.
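    For reference, the behaviour the styles above are meant to produce - shrink the frame to fit inside a 720x720 box while keeping its aspect ratio - can be expressed directly as an ffmpeg scale filter. The sketch below only illustrates that expected output, not Paperclip's own processing; the file names are placeholders.

    # Rough sketch of the expected "fit within 720x720" scaling, done with
    # ffmpeg directly rather than through Paperclip/ImageMagick geometry.
    import subprocess

    def fit_within_720(src, dst):
        # force_original_aspect_ratio=decrease keeps the aspect ratio and only
        # shrinks whichever dimension is too large until the frame fits 720x720.
        subprocess.check_call([
            "ffmpeg", "-y", "-i", src,
            "-vf", "scale=720:720:force_original_aspect_ratio=decrease",
            dst,
        ])

    fit_within_720("input.mov", "output.mp4")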

    Here is a snippet from my Gemfile.lock so you can see which versions I am running:

    delayed_paperclip (3.0.1)
     activejob (>= 4.2)
     paperclip (>= 3.3)
    paperclip (5.1.0)
     activemodel (>= 4.2.0)
     activesupport (>= 4.2.0)
     cocaine (~> 0.5.5)
     mime-types
     mimemagic (~> 0.3.0)
    paperclip-av-transcoder (0.6.4)
     av (~> 0.9.0)
     paperclip (>= 2.5.2)
    paperclip-optimizer (2.0.0)
     image_optim (~> 0.19)
     paperclip (>= 3.4)

    Update
    I retried this with a video recorded in portrait mode (iPhone 6 front-facing webcam) and the video file’s output size was 720x720. The images were still square as before.

  • QTableWidget and QProcess - update table based on multiple process results

    9 March 2017, by Spencer

    I have a Python program that runs through a QTableWidget and, for each item, runs a QProcess (an FFmpeg process, to be exact). What I’m trying to do is update the "parent" cell when the process completes. Right now a for loop goes through each row, launches a process for it, and connects the finished signal of that process to a "finished" function, which updates the QTableWidget cell. I’m just having trouble properly telling the function WHICH cell to update - right now I am passing it the index of the current row (seeing as it is being spawned by the for loop), but by the time the processes start to finish it only ever gets the last row in the table... I’m quite new to Python and PyQt, so it is possible there is some fundamental thing I have wrong here!

    I tried passing the actual QTableWidgetItem instead of the index, but I got this error: "RuntimeError: wrapped C/C++ object of type QTableWidgetItem has been deleted"

    My code; the function "finished" and line #132 are the relevant parts:

    import sys, os, re
    from PyQt4 import QtGui, QtCore

    class BatchTable(QtGui.QTableWidget):
       def __init__(self, parent):
           super(BatchTable, self).__init__(parent)
           self.setAcceptDrops(True)
           self.setColumnCount(4)
           self.setColumnWidth(1,50)
           self.hideColumn(3)
           self.horizontalHeader().setStretchLastSection(True)
           self.setHorizontalHeaderLabels(QtCore.QString("Status;Alpha;File;Full Path").split(";"))

           self.doubleClicked.connect(self.removeProject)

       def removeProject(self, myItem):
           row = myItem.row()
           self.removeRow(row)

       def dragEnterEvent(self, e):
           if e.mimeData().hasFormat('text/uri-list'):
               e.accept()
           else:
               print "nope"
               e.ignore()

       def dragMoveEvent(self, e):
           e.accept()

       def dropEvent(self, e):
           if e.mimeData().hasUrls:
               for url in e.mimeData().urls():
                   chkBoxItem = QtGui.QTableWidgetItem()
                   chkBoxItem.setFlags(QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsEnabled)
                   chkBoxItem.setCheckState(QtCore.Qt.Unchecked)

                   rowPosition = self.rowCount()
                   self.insertRow(rowPosition)
                   self.setItem(rowPosition, 0, QtGui.QTableWidgetItem("Ready"))
                   self.setItem(rowPosition, 1, chkBoxItem)
                   self.setItem(rowPosition, 2, QtGui.QTableWidgetItem(os.path.split(str(url.toLocalFile()))[1]))
                   self.setItem(rowPosition, 3, QtGui.QTableWidgetItem(url.toLocalFile()))
                   self.item(rowPosition, 0).setBackgroundColor(QtGui.QColor(80, 180, 30))

    class ffmpegBatch(QtGui.QWidget):
       def __init__(self):
           super(ffmpegBatch, self).__init__()
           self.initUI()

       def initUI(self):

           self.edit = QtGui.QTextEdit()

           cmdGroup = QtGui.QGroupBox("Commandline arguments")
           fpsLbl = QtGui.QLabel("FPS:")
           self.fpsCombo = QtGui.QComboBox()
           self.fpsCombo.addItem("29.97")
           self.fpsCombo.addItem("23.976")
           hbox1 = QtGui.QHBoxLayout()
           hbox1.addWidget(fpsLbl)
           hbox1.addWidget(self.fpsCombo)
           cmdGroup.setLayout(hbox1)

           saveGroup = QtGui.QGroupBox("Output")
           self.outputLocation = QtGui.QLineEdit()
           self.browseBtn = QtGui.QPushButton("Browse")
           saveLocationBox = QtGui.QHBoxLayout()
           # Todo: add "auto-step up two folders" button
           saveLocationBox.addWidget(self.outputLocation)
           saveLocationBox.addWidget(self.browseBtn)
           saveGroup.setLayout(saveLocationBox)

           runBtn = QtGui.QPushButton("Run Batch Transcode")

           mainBox = QtGui.QVBoxLayout()
           self.table = BatchTable(self)
           # TODO: add "copy from clipboard" feature
           mainBox.addWidget(self.table)
           mainBox.addWidget(cmdGroup)
           mainBox.addWidget(saveGroup)
           mainBox.addWidget(runBtn)
           mainBox.addWidget(self.edit)

           self.setLayout(mainBox)
           self.setGeometry(300, 300, 600, 500)
           self.setWindowTitle('FFMPEG Batch Converter')

           # triggers/events
           runBtn.clicked.connect(self.run)

       def RepresentsInt(self, s):
           try:
               int(s)
               return True
           except ValueError:
               return False

       def run(self):
           if (self.outputLocation.text() == ''):
               return
           for projIndex in range(self.table.rowCount()):
               # collect some data
               ffmpeg_app = "C:\\Program Files\\ffmpeg-20150702-git-03b2b40-win64-static\\bin\\ffmpeg"
               frameRate = self.fpsCombo.currentText()
               inputFile = self.table.model().index(projIndex,3).data().toString()
               outputPath = self.outputLocation.text()
               outputPath = outputPath.replace("/", "\\")

               # format the input for ffmpeg
               # find how many digits the trailing frame number has, stored as 'd'
               imageName = os.path.split(str(inputFile))[1]
               imageName, imageExt = os.path.splitext(imageName)
               length = len(imageName)
               d = 0
               while (self.RepresentsInt(imageName[length-2:length-1]) == True):
                   length = length-1
                   d = d+1
               inputPath = os.path.split(str(inputFile))[0]
               inputFile = imageName[0:length-1]
               inputFile = inputPath + "/" + inputFile + "%" + str(d+1) + "d" + imageExt
               inputFile = inputFile.replace("/", "\\")

               # format the output
               outputFile = outputPath + "\\" + imageName[0:length-2] + ".mov"


               # build the commandline
               cmd = '"' + ffmpeg_app + '"' + ' -y -r ' + frameRate + ' -i ' + '"' + inputFile + '"' + ' -vcodec dnxhd -b:v 145M -vf colormatrix=bt601:bt709 -flags +ildct ' + '"' + outputFile + '"'

               # launch the process
               proc = QtCore.QProcess(self)
               proc.finished.connect(lambda: self.finished(projIndex))
               proc.setProcessChannelMode(proc.MergedChannels)
               proc.start(cmd)
               proc.readyReadStandardOutput.connect(lambda: self.readStdOutput(proc, projIndex, 100))
               self.table.setItem(projIndex, 0, QtGui.QTableWidgetItem("Running..."))
               self.table.item(projIndex, 0).setBackgroundColor(QtGui.QColor(110, 145, 30))

       def readStdOutput(self, proc, projIndex, total):
           currentLine = QtCore.QString(proc.readAllStandardOutput())
           currentLine = str(currentLine)
           frameEnd = currentLine.find("fps", 0, 15)
           if frameEnd != -1:
               m = re.search("\d", currentLine)
               if m:
                   frame = currentLine[m.start():frameEnd]
                   percent = (float(frame)/total)*100
                   print "Percent: " + str(percent)
                   self.edit.append(str(percent))
                   self.table.setItem(projIndex, 0, QtGui.QTableWidgetItem("Encoded: " + str(percent) + "%"))

       def finished(self, projIndex):
           # TODO: This isn't totally working properly for multiple processes (seems to get confused)
           print "A process completed"
           print self.sender().readAllStandardOutput()
           if self.sender().exitStatus() == 0:
               self.table.setItem(projIndex, 0, QtGui.QTableWidgetItem("Encoded"))
               self.table.item(projIndex, 0).setBackgroundColor(QtGui.QColor(45, 145, 240))


    def main():
       app = QtGui.QApplication(sys.argv)
       ex = ffmpegBatch()
       ex.show()
       sys.exit(app.exec_())

    if __name__ == '__main__':
       main()

    (And yes I do know that my percentage update is totally wrong right now, still working on that...)
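    A side note on the symptom above (every callback reporting the last row): lambdas created in a loop look up the loop variable when they are called, not when they are defined, so by the time the processes finish, projIndex has already reached the last row. A minimal, Qt-independent sketch of that behaviour and of the common workaround of binding the value through a default argument:

    # Each lambda closes over the *variable* i, so once the loop has ended
    # they all see its final value.
    late = [lambda: i for i in range(3)]
    print([f() for f in late])    # [2, 2, 2]

    # A default argument is evaluated when the lambda is created, so each
    # callback keeps its own copy of the index.
    bound = [lambda i=i: i for i in range(3)]
    print([f() for f in bound])   # [0, 1, 2]

    Applied to the loop in run(), binding projIndex at connection time (for instance with a default argument or functools.partial) keeps each connection tied to its own row; note, though, that Qt signals such as finished also pass their own arguments to the callback, and those need to be absorbed explicitly.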

  • What is AVHWAccel, and how can I use it?

    15 March 2017, by shintaroid

    I want to make use of hardware acceleration for decoding an H.264-encoded MP4 file.

    My computing environment:

    Hardware: MacPro (2015 model)
    Software: FFmpeg (installed by brew)

    Here is the output of the FFmpeg command:

    $ffmpeg -hwaccels
    Hardware acceleration methods:
    vda
    videotoolbox

    According to this document, there are two options for my environment: VDA and VideoToolbox. I tried VDA in C++:

    Codec = avcodec_find_decoder_by_name("h264_vda");

    It kind of worked, but the output pixel format is UYVY422, which I have trouble dealing with (any suggestion on how to render UYVY422 in C++? The ideal format is yuv420p).

    So I want to try VideoToolbox, but there is no equally simple option like the following (it may work for encoding, though):

    Codec = avcodec_find_decoder_by_name("h264_videotoolbox");

    It seems I should use AVHWAccel, but what is AVHWAccel, and how do I use it?
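    As a point of comparison (not the libavcodec AVHWAccel API itself): the ffmpeg command-line tool exposes the same hardware path through its -hwaccel option, and forcing -pix_fmt yuv420p on the output sidesteps the UYVY422 handling entirely. A rough sketch, with placeholder file names and assuming an ffmpeg build with VideoToolbox support:

    # Command-line equivalent of a hardware-accelerated decode on macOS,
    # with the output converted to yuv420p. File names are placeholders.
    import subprocess

    subprocess.check_call([
        "ffmpeg",
        "-hwaccel", "videotoolbox",   # hardware-accelerated H.264 decode
        "-i", "input.mp4",
        "-pix_fmt", "yuv420p",        # request yuv420p frames on the output
        "-c:v", "libx264",            # re-encode with any encoder you need
        "output.mp4",
    ])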

    Part of my C++ code:

    for( unsigned int i = 0; i < pFormatCtx->nb_streams; i++ ){
           if(pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO){
               pCodecCtx = pFormatCtx->streams[i]->codec;
               video_stream = pFormatCtx->streams[i];
               if( pCodecCtx->codec_id == AV_CODEC_ID_H264 ){
                   //pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
                   pCodec = avcodec_find_decoder_by_name("h264_vda");
                   break;
               }
           }
       }
       // open codec
       if( pCodec ){
           if((ret=avcodec_open2(pCodecCtx, pCodec, NULL)) < 0) {
           ....