
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (62)
-
Taking part in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so it can reach new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
At the moment MediaSPIP is only available in French and (...)
-
Enabling/disabling features (plugins)
18 February 2011
To manage adding and removing extra features (plugins), MediaSPIP uses SVP as of version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To get there, go to the configuration area and then open the "Plugin management" page.
By default MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work perfectly with each (...)
-
The farm's regular cron tasks
1 December 2010
Managing the farm relies on running several repetitive tasks, known as cron tasks, at regular intervals.
The super cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the cron of every instance in the shared-hosting setup on a regular basis. Combined with a system cron on the central site of the farm, it generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)
On other sites (9337)
-
FFMPEG FFPROBE Get Frame Count On PowerShell [duplicate]
27 April 2020, by ilham zacky
I am new to FFmpeg. I have some audio and video files and I need to get their duration with frames, in the format H:M:S:F ("00:00:00:00"). I am using PowerShell.



Currently my duration extraction works, but instead of frames it prints a decimal value.
Note: in my output the maximum number of frames should be 30.



This is my code



# File names built from an $id variable defined earlier in the script
$audioId = "$id.m4a"
$videoId = "$id.mp4"

# Parse the "Duration: HH:MM:SS.cc" line from ffprobe / ffmpeg's console output
$duration1 = if ((ffprobe -i $audioId 2>&1 | Out-String) -match 'Duration:\s+([\d:.]+)') { $matches[1] }

$duration = if ((ffmpeg -i $videoId 2>&1 | Out-String) -match 'Duration:\s+([\d:.]+)') { $matches[1] }

# Swap the decimal separator for a colon (this still leaves hundredths of a second, not frames)
$newduration1 = ("$duration1").Replace(".",":")
$newduration = ("$duration").Replace(".",":")

echo $duration1
echo $duration





My output looks like this:



00:00:03.48
00:00:03.46
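To go from that decimal value to the H:M:S:F form the question asks for, the fractional seconds have to be scaled by the frame rate. A minimal sketch (not from the original post), assuming a fixed 30 fps and reusing the $duration variable from the code above:

# Illustrative only: convert "HH:MM:SS.cc" into "HH:MM:SS:FF", assuming 30 fps footage
$fps = 30
if ($duration -match '^(\d+):(\d+):(\d+)\.(\d+)$') {
    $h = $matches[1]; $m = $matches[2]; $s = $matches[3]
    # Treat the part after the dot as a fraction of a second and scale it to frames
    $frames = [int][math]::Floor([double]("0." + $matches[4]) * $fps)
    "{0}:{1}:{2}:{3:d2}" -f $h, $m, $s, $frames
}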




-
Setting up / Installing OpenCV 2.4.6.1+ on Ubuntu 12.04.02
30 May 2016, by Damilola
I had previously used OpenCV 2.4.5 with certain configs and packages on Ubuntu 12.04.1, but had issues upgrading to OpenCV 2.4.6.1 on Ubuntu 12.04.2.
I would like to share some ideas (a compilation of noteworthy information gathered from several sources, including SO, ubuntu.org, asklinux.org and many others, and of course from trying several procedures).
Below is what eventually got me through.
NOTE: ensure you uninstall any previous installation of OpenCV, FFmpeg and the other dependencies previously installed.
STEP 1 (install ffmpeg and dependencies)
# goto http://sourceforge.net/projects/opencvlibrary/files/opencv-unix/
# download the latest stable opencv such as 2.4.6.1 (http://sourceforge.net/projects/opencvlibrary/files/opencv-unix/2.4.5/opencv-2.4.5.1.tar.gz/download) to current directory (such as home or ~/Document)
# cd /opt
# tar -xvf /OpenCV-2.4.6.1.tar.gz
# cd OpenCV-2.4.6.1
# create a folder called prepare under the current dir (following the previous step, this should be /opt/OpenCV-2.4.6.1)
# cd prepare
# Copy the following script to gedit and save it as install.sh in the current dir; this should be /opt/OpenCV-2.4.6.1/prepare
# Check corresponding url used in the script for latest versions of the package and replace as required
# Open terminal and navigate to location used above
# sudo chmod +x install.sh
# ./install.sh

echo "Removing any pre-installed ffmpeg, x264, and other dependencies (not all the previously installed dependencies)"
sudo apt-get remove ffmpeg x264 libx264-dev libvpx-dev librtmp0 librtmp-dev libopencv-dev
sudo apt-get update
arch=$(uname -m)
if [ "$arch" == "i686" -o "$arch" == "i386" -o "$arch" == "i486" -o "$arch" == "i586" ]; then
flag=0
else
flag=1
fi
echo "Installing Dependenices"
sudo apt-get install autoconf automake make g++ curl cmake bzip2 python unzip \
build-essential checkinstall git git-core libass-dev libgpac-dev \
libsdl1.2-dev libtheora-dev libtool libva-dev libvdpau-dev libvorbis-dev libx11-dev \
libxext-dev libxfixes-dev pkg-config texi2html zlib1g-dev
echo "downloading yasm (assembler used by x264 and FFmpeg)"
# use git or tarball (not both)
wget http://www.tortall.net/projects/yasm/releases/yasm-1.2.0.tar.gz
tar xzvf yasm-1.2.0.tar.gz
cd yasm-1.2.0
echo "installing yasm"
./configure
make
sudo make install
cd ..
echo 'READ NOTE BELOW which was extracted from http://wiki.serviio.org/doku.php?id=build_ffmpeg_linux'
echo 'New version of x264 contains by default support of OpenCL. If not installed or without sense (example Ubuntu 12.04LTS on VMWare) add to configure additional option --disable-opencl. Without this option ffmpeg could not be configured (ERROR: libx264 not found).'
echo "downloading x264 (H.264 video encoder)"
# use git or tarball (not both)
# git clone http://repo.or.cz/r/x264.git or
git clone git://git.videolan.org/x264.git
cd x264
# wget ftp://ftp.videolan.org/pub/videolan/x264/snapshots/x264-snapshot-20130801-2245-stable.tar.bz2
# tar -xvjf x264-snapshot-20130801-2245-stable.tar.bz2
# cd x264-snapshot-20130801-2245-stable/
echo "Installing x264"
if [ $flag -eq 0 ]; then
./configure --enable-static --disable-opencl
else
./configure --enable-shared --enable-pic --disable-opencl
fi
make
sudo make install
cd ..
echo "downloading fdk-aac (AAC audio encoder)"
# use git or tarball (not both)
git clone --depth 1 git://github.com/mstorsjo/fdk-aac.git
cd fdk-aac
echo "installing fdk-aac"
autoreconf -fiv
./configure --disable-shared
make
sudo make install
cd ..
echo "installing libmp3lame-dev (MP3 audio encoder.)"
sudo apt-get install libmp3lame-dev
echo "downloading libopus (Opus audio decoder and encoder.)"
wget http://downloads.xiph.org/releases/opus/opus-1.0.3.tar.gz
tar xzvf opus-1.0.3.tar.gz
cd opus-1.0.3
echo "installing libopus"
./configure --disable-shared
make
sudo make install
cd ..
echo "downloading libvpx VP8/VP9 video encoder and decoder)"
# use git or tarball (not both)
git clone --depth 1 http://git.chromium.org/webm/libvpx.git
cd libvpx
# wget http://webm.googlecode.com/files/libvpx-v1.1.0.tar.bz2 (this seems not to be updated, but can still be used if the fedoraproject link below is not available)
# wget http://pkgs.fedoraproject.org/repo/pkgs/libvpx/libvpx-v1.2.0.tar.bz2/400d7c940c5f9d394893d42ae5f463e6/libvpx-v1.2.0.tar.bz2
# tar xvjf libvpx-v1.2.0.tar.bz2
# cd libvpx-v1.2.0
echo "installing libvpx"
./configure --disable-examples
make
sudo make install
cd ..
sudo ldconfig
echo "downloading ffmpeg"
# git clone http://repo.or.cz/r/ffmpeg.git
git clone git://source.ffmpeg.org/ffmpeg.git
cd ffmpeg/
# wget http://ffmpeg.org/releases/ffmpeg-2.0.tar.bz2
# tar -xvjf ffmpeg-2.0.tar.bz2
# cd ffmpeg-2.0/
echo "installing ffmpeg"
if [ $flag -eq 0 ]; then
./configure --enable-gpl --enable-libass --enable-libfdk-aac --enable-libopus --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-nonfree --enable-postproc --enable-version3 --enable-x11grab --enable-libvpx
else
./configure --enable-gpl --enable-libass --enable-libfdk-aac --enable-libopus --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-nonfree --enable-postproc --enable-version3 --enable-x11grab --enable-libvpx --enable-shared
fi
make
sudo make install
hash -r
cd .. # move up one level to prepare folder
cd .. # move up one level to opencv folder
echo "Checking to see if you're using your new ffmpeg"
ffmpeg 2>&1 | head -n1
sudo ldconfig

STEP 2 (Install OpenCV and necessary packages)
echo "Installing Dependencies"
sudo apt-get install libtiff4-dev libjpeg-dev libjasper-dev
echo "installing Video I/O libraries, support for Firewire video cameras and video streaming libraries"
sudo apt-get install libav-tools libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev libxine-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev v4l-utils v4l-conf
echo "installing the Python development environment and the Python Numerical library"
sudo apt-get install python-dev python-numpy
echo "installing the parallel code processing library (the Intel tbb library)"
sudo apt-get install libtbb-dev
echo "installing the Qt dev library"
sudo apt-get install libqt4-dev libgtk2.0-dev
echo "installing other dependencies (if need be it would upgrade current version of the packages)"
sudo apt-get install patch subversion ruby librtmp0 librtmp-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libvpx-dev libxvidcore-dev
echo "installing optional packages"
sudo apt-get install libdc1394-utils libdc1394-22-dev libdc1394-22 libjpeg-dev libpng-dev libtiff-dev libjasper-dev

STEP 3 (run ldconfig)
# Open a new terminal window
# Open /etc/ld.so.conf and check
# whether the paths "/usr/lib" and "/usr/local/lib" exist in the file. If not, add them manually or by executing:
echo "/usr/local/lib" | sudo tee -a /etc/ld.so.conf
echo "/usr/lib" | sudo tee -a /etc/ld.so.conf
# execute the following
sudo ldconfig

STEP 4a (Build & Install for OS Usage)
# still ensure you haven't closed the new terminal window opened in STEP 3
# execute the following
mkdir os_build
cd os_build
cmake -DCMAKE_BUILD_TYPE=RELEASE -DCMAKE_INSTALL_PREFIX=/usr/local -DBUILD_NEW_PYTHON_SUPPORT=ON -DINSTALL_PYTHON_EXAMPLES=ON -DWITH_TBB=ON -DWITH_V4L=ON -DINSTALL_C_EXAMPLES=ON -DBUILD_EXAMPLES=ON -DWITH_QT=ON -DWITH_OPENGL=ON -DWITH_OPENCL=ON -DWITH_EIGEN=ON -DWITH_OPENEXR=ON ..
make
sudo make install
# add the following to user environment variable ~/.bashrc
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib
export PKG_CONFIG_PATH=${PKG_CONFIG_PATH}:/usr/local/lib/pkgconfig
# execute the following
sudo ldconfig
# start to use and enjoy opencv, it should have been installed into any of these locations
# /usr/local/include/opencv2, /usr/local/include/opencv, /usr/include/opencv, /usr/include/opencv2, /usr/local/share/opencv
# /usr/local/share/OpenCV, /usr/share/opencv, /usr/share/OpenCV, /usr/local/bin/opencv*, /usr/local/lib/libopencv*

STEP 4b (Build for Java Usage): OPTIONAL
# still ensure you haven't closed the new terminal window opened in STEP 3
# execute the following
cd ..
mkdir java_build
cd java_build
cmake -DCMAKE_BUILD_TYPE=RELEASE -DBUILD_SHARED_LIBS=OFF -DINSTALL_PYTHON_EXAMPLES=ON -DWITH_TBB=ON -DWITH_V4L=ON -DINSTALL_C_EXAMPLES=ON -DBUILD_EXAMPLES=ON -DWITH_QT=ON -DWITH_OPENGL=ON -DWITH_OPENCL=ON -DWITH_EIGEN=ON -DWITH_OPENEXR=ON ..
make
# You can check the "java_build/bin" directory to locate the jar and libopencv_java.so file for your development
# As stated in the docs, the Java bindings dynamic library is all-sufficient, i.e. it doesn't depend on other OpenCV libs but includes all the OpenCV code inside
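# (Not part of the original guide) One quick, hedged way to verify the Java build: save the tiny test below as LoadTest.java, compile it with the jar from "java_build/bin" on the classpath, and run it with -Djava.library.path pointing at the directory containing libopencv_java.so
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class LoadTest {
    public static void main(String[] args) {
        // Load libopencv_java, the all-in-one Java bindings library mentioned above
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // Build a small matrix to confirm the native code is actually reachable
        Mat m = Mat.eye(3, 3, CvType.CV_8UC1);
        System.out.println("OpenCV loaded; 3x3 identity:\n" + m.dump());
    }
}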
STEP 5 (install v4l; Note: installing v4l-utils after opencv installation works for Ubuntu 12.04.2 & OpenCV 2.4.6.1)
# still ensure you haven't closed the new terminal window opened in STEP 3
# goto http://www.linuxtv.org/downloads/v4l-utils
# download the latest v4l such as v4l-utils-0.9.5.tar.bz2
# copy the downloaded file to the current terminal dir (following previous step, this should be /prepare)
# execute the following
tar -xvjf v4l-utils-0.9.5.tar.bz2
cd v4l-utils-0.9.5/
./configure
make
sudo make install
cd ..
cd .. # (to go to the OpenCV source directory)
sudo ldconfig

Worth Noting
# To check the path where opencv & other lib files are stored, do:
pkg-config --cflags opencv
(the output will look like:)
-I/usr/include/opencv
pkg-config --libs opencv
(the output will look like:)
-lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_ml -lopencv_video -lopencv_features2d -lopencv_calib3d -lopencv_objdetect -lopencv_contrib -lopencv_legacy -lopencv_flann
# The above paths are needed to compile your opencv programs, as given in the next illustration.
# write a simple C program to test, by saving the program below in a file named DisplayImage.c
#include <stdio.h>
#include <opencv2/highgui/highgui.hpp>
int main(int argc, char *argv[]) {
IplImage* img=0; /* pointer to an image */
printf("Hello\n");
if(argv[1] != 0)
img = cvLoadImage(argv[1], 0); // 1 for color
else
printf("Enter filename\n");
if(img != 0) {
cvNamedWindow("Display", CV_WINDOW_AUTOSIZE); // create a window
cvShowImage("Display", img); // show image in window
cvWaitKey(0); // wait until user hits a key
cvDestroyWindow("Display");
}
else
printf("File not found\n");
return 0;
}
# write a simple C++ program to test, by saving the program below in a file named DisplayImage.cpp
#include <stdio.h>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;
int main( int argc, char** argv )
{
Mat image;
image = imread( argv[1], 1 );
if( argc != 2 || !image.data )
{
printf( "No image data \n" );
return -1;
}
namedWindow( "Display Image", CV_WINDOW_AUTOSIZE );
imshow( "Display Image", image );
waitKey(0);
return 0;
}
# To compile & run :
g++ DisplayImage.cpp `pkg-config --cflags --libs opencv` && ./a.out img
or
g++ DisplayImage.cpp -I/usr/include/opencv -I/usr/local/include -lopencv_core -lopencv_highgui -lopencv_ml -lopencv_imgproc -lopencv_video -lopencv_features2d -lopencv_calib3d -lopencv_objdetect -lopencv_contrib -lopencv_legacy -lopencv_flann -lopencv_nonfree && ./a.out img
where “img” is the name of any image (with its extension) in the same folder.
You should be able to see “Hello” and the image in a different window.
If this runs, Congrats! now you can run any C/C++ program with opencv lib.
# Now let's simplify the above big command by making a shortcut for it:
go to your local home directory (cd /home/) and open the .bashrc file using gedit (the file will be hidden). Append the following to the file:
alias gcv="g++ -I/usr/include/opencv -I/usr/local/include -lopencv_core -lopencv_highgui -lopencv_ml -lopencv_imgproc -lopencv_video -lopencv_features2d -lopencv_calib3d -lopencv_objdetect -lopencv_contrib -lopencv_legacy -lopencv_flann -lopencv_nonfree"
and save. Close the terminal and open it again (this change requires a new terminal session).
# Now, go to the directory containing a sample program & do
gcv DisplayImage.cpp && ./a.out input_img.jpg
or
gcv DisplayImage.cpp
./a.out input_img.jpg
As you can see, the commands now become similar to $ cc filename.c and $ ./a.out, which are normally used for compiling and executing C/C++ programs.
Some ways to check whether all the lib files are installed:
apt-cache search opencv
returns :
libcv-dev - Translation package for libcv-dev
libcv2.3 - computer vision library - libcv* translation package
libcvaux-dev - Translation package for libcvaux-dev
libcvaux2.3 - computer vision library - libcvaux translation package
libhighgui-dev - Translation package for libhighgui-dev
libhighgui2.3 - computer vision library - libhighgui translation package
libopencv-calib3d-dev - development files for libopencv-calib3d
libopencv-calib3d2.3 - computer vision Camera Calibration library
libopencv-contrib-dev - development files for libopencv-contrib
libopencv-contrib2.3 - computer vision contrib library
libopencv-core-dev - development files for libopencv-core
libopencv-core2.3 - computer vision core library
libopencv-dev - development files for opencv
libopencv-features2d-dev - development files for libopencv-features2d
libopencv-features2d2.3 - computer vision Feature Detection and Descriptor Extraction library
libopencv-flann-dev - development files for libopencv-flann
libopencv-flann2.3 - computer vision Clustering and Search in Multi-Dimensional spaces library
libopencv-gpu-dev - development files for libopencv-gpu
libopencv-gpu2.3 - computer vision GPU Processing library
libopencv-highgui-dev - development files for libopencv-highgui
libopencv-highgui2.3 - computer vision High-level GUI and Media I/O library
libopencv-imgproc-dev - development files for libopencv-imgproc
libopencv-imgproc2.3 - computer vision Image Processing library
libopencv-legacy-dev - development files for libopencv-legacy
libopencv-legacy2.3 - computer vision legacy library
libopencv-ml-dev - development files for libopencv-ml
libopencv-ml2.3 - computer vision Machine Learning library
libopencv-objdetect-dev - development files for libopencv-objdetect
libopencv-objdetect2.3 - computer vision Object Detection library
libopencv-video-dev - development files for libopencv-video
libopencv-video2.3 - computer vision Video analysis library
opencv-doc - OpenCV documentation and examples
python-opencv - Python bindings for the computer vision library
-
Transcoding Modern Formats
17 August 2014
I’ve noticed that this blog still gets a decent amount of traffic, particularly to some of the older articles about transcoding. Since I’ve been working on a tool in this space recently, I thought I’d write something up in case it helps folks unravel how to think about transcoding these days.
The tool I’ve been working on is EditReady, a transcoding app for the Mac. But why do you want to transcode in the first place?
Dailies
After a day of shooting, there are a lot of people who need to see the footage from the day. Most of these folks aren’t equipped with editing suites or viewing stations - they want to view footage on their desktop or mobile device. That can be a problem if you’re shooting ProRes or similar.
Converting ProRes, DNxHD or MPEG2 footage with EditReady to H.264 is fast and easy. With bulk metadata editing and custom file naming, the management of all the files from the set becomes simpler and more trackable.
One common workflow would be to drop all the footage from a given shot into EditReady. Use the "set metadata for all" command to attach a consistent reel name to all of the clips. Do some quick spot-checks on the footage using the built in player to make sure it’s what you expect. Use the filename builder to tag all the footage with the reel name and the file creation date. Then, select the H.264 preset and hit convert. Now anyone who needs the footage can easily take the proxies with them on the go, without needing special codecs or players, and regardless of whether they’re working on a PC, a Mac, or even a mobile device.
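(Not part of the original post.) As a rough command-line analogue of that proxy step, an ffmpeg invocation along these lines would produce a similar H.264 proxy; the file names and quality settings here are illustrative assumptions, not EditReady's internals:

ffmpeg -i A001_C002.mov -c:v libx264 -crf 20 -preset fast -c:a aac -b:a 192k A001_C002_proxy.mp4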
If your production is being shot in the Log space, you can use the LUT feature in EditReady to give your viewers a more traditional "video levels" daily. Just load a basic Log to Video Levels LUT for the batch, and your converted files will more closely resemble graded footage.
Mezzanine Formats
Even though many modern post production tools can work natively with H.264 from a GoPro or iPhone, there are a variety of downsides to that type of workflow. First and foremost is performance. When you’re working with H.264 in an editor or color correction tool, your computer has to constantly work to decompress the H.264 footage. Those are CPU cycles that aren’t being spent generating effects, responding to user interface clicks, or drawing your previews. Even apps that endeavor to support H.264 natively often get bogged down, or have trouble with all of the "flavors" of H.264 that are in use. For example, mixing and matching H.264 from a GoPro with H.264 from a mobile phone often leads to hiccups or instability.
By using EditReady to batch transcode all of your footage to a format like ProRes or DNxHD, you get great performance throughout your post production pipeline, and more importantly, you get consistent performance. Since you’ll generally be exporting these formats from other parts of your pipeline as well - getting ProRes effects shots for example - you don’t have to worry about mix-and-match problems cropping up late in the production process either.
Just like with dailies, the ability to apply bulk or custom metadata to your footage during your initial ingest also makes management easier for the rest of your production. It also makes your final output faster - transcoding from H.264 to another format is generally slower than transcoding from a mezzanine format. Nothing takes the fun out of finishing a project like watching an "exporting" bar endlessly creep along.
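(Again, not from the original post.) For reference, a mezzanine transcode of this kind can be sketched with ffmpeg roughly as follows; the ProRes encoder choice and file names are assumptions:

ffmpeg -i gopro_clip.mp4 -c:v prores_ks -profile:v 3 -c:a pcm_s16le gopro_clip_prores.mov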
Modernization
The video industry has gone through a lot of digital formats over the last 20 years. As Mac OS X has been upgraded over the years, it’s gotten harder to play some of those old formats. There’s a lot of irreplaceable footage stored in formats like Sorensen Video, Apple Intermediate Codec, or Apple Animation. It’s important that this footage be moved to a modern format like ProRes or H.264 before it becomes totally unplayable by modern computers. Because EditReady contains a robust, flexible backend with legacy support, you can bring this footage in, select a modern format, and click convert. Back when I started this blog, we were mostly talking about DV and HDV, with a bit of Apple Intermediate Codec mixed in. If you’ve still got footage like that around, it’s time to bring it forward!
Output
Finally, the powerful H.264 transcoding pipeline in EditReady means you can generate beautiful deliverable H.264 more rapidly than ever. Just drop in your final, edited ProRes, DNxHD, or even uncompressed footage and generate a high quality H.264 for delivery. It’s never been this easy!
See for yourself
We released a free trial of EditReady so you can give it a shot yourself. Or drop me a line if you have questions.