
Other articles (111)
-
General document management
13 May 2011 — MédiaSPIP never modifies the original document uploaded.
For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original downloadable in case it cannot be read in a web browser; and retrieving the original document's metadata to describe the file textually.
The tables below explain what MédiaSPIP can do (...) -
Use, discuss, criticize
13 April 2011 — Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users. -
User profiles
12 April 2011 — Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
Users can edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)
On other sites (8531)
-
Android : Pass video path to FFmpeg
7 January 2016, by marian — I have developed an app that plays video from the gallery. I would like to add a watermark to the selected video using an FFmpeg command, but I do not know how to pass the path to the FFmpeg command. I could not find proper tutorials or references on this. My code is as follows:
MainActivity.java:
import android.app.Activity;
import android.app.ProgressDialog;
import android.content.DialogInterface;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import android.os.Handler;
import android.os.Message;
import android.os.PowerManager;
import android.util.Log;
import android.view.View;
import android.widget.Button;
import android.widget.Toast;
import android.widget.VideoView;
import com.netcompss.ffmpeg4android.CommandValidationException;
import com.netcompss.ffmpeg4android.GeneralUtils;
import com.netcompss.ffmpeg4android.Prefs;
import com.netcompss.ffmpeg4android.ProgressCalculator;
import com.netcompss.loader.LoadJNI;
public class MainActivity extends Activity {
public ProgressDialog progressBar;
String workFolder = null;
String demoVideoFolder = null;
String demoVideoPath = null;
String vkLogPath = null;
LoadJNI vk;
private final int STOP_TRANSCODING_MSG = -1;
private final int FINISHED_TRANSCODING_MSG = 0;
private boolean commandValidationFailedFlag = false;
Button button;
VideoView videoView;
private static final int PICK_FROM_GALLERY = 1;
private void runTranscodingUsingLoader() {
Log.i(Prefs.TAG, "runTranscodingUsingLoader started...");
PowerManager powerManager = (PowerManager)MainActivity.this.getSystemService(Activity.POWER_SERVICE);
PowerManager.WakeLock wakeLock = powerManager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "VK_LOCK");
Log.d(Prefs.TAG, "Acquire wake lock");
wakeLock.acquire();
String[] complexCommand = {"ffmpeg","-y" ,"-i", "/sdcard/videokit/in.mp4","-strict","experimental",
"-vf", "movie=/sdcard/videokit/watermark.png [watermark];" +
" [in][watermark] overlay=main_w-overlay_w-10:10 [out]","-s",
"320x240","-r", "30", "-b", "15496k", "-vcodec", "mpeg4","-ab",
"48000", "-ac", "2", "-ar", "22050", "/sdcard/videokit/out1.mp4"};
///////////////////////////////////////////////////////////////////////
vk = new LoadJNI();
try {
// running complex command with validation
vk.run(complexCommand, workFolder, getApplicationContext());
// running without command validation
//vk.run(complexCommand, workFolder, getApplicationContext(), false);
// running regular command with validation
//vk.run(GeneralUtils.utilConvertToComplex(commandStr), workFolder, getApplicationContext());
Log.i(Prefs.TAG, "vk.run finished.");
// copying vk.log (internal native log) to the videokit folder
GeneralUtils.copyFileToFolder(vkLogPath, demoVideoFolder);
} catch (CommandValidationException e) {
Log.e(Prefs.TAG, "vk run exception.", e);
commandValidationFailedFlag = true;
} catch (Throwable e) {
Log.e(Prefs.TAG, "vk run exception.", e);
}
finally {
if (wakeLock.isHeld()) {
wakeLock.release();
Log.i(Prefs.TAG, "Wake lock released");
}
else{
Log.i(Prefs.TAG, "Wake lock is already released, doing nothing");
}
}
// finished Toast
String rc = null;
if (commandValidationFailedFlag) {
rc = "Command Validation Failed";
}
else {
rc = GeneralUtils.getReturnCodeFromLog(vkLogPath);
}
final String status = rc;
MainActivity.this.runOnUiThread(new Runnable() {
public void run() {
Toast.makeText(MainActivity.this, status, Toast.LENGTH_LONG).show();
if (status.equals("Transcoding Status: Failed")) {
Toast.makeText(MainActivity.this, "Check: " + vkLogPath + " for more information.", Toast.LENGTH_LONG).show();
}
}
});
}
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
button = (Button) findViewById(R.id.button);
videoView = (VideoView) findViewById(R.id.videoview);
button.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
// TODO Auto-generated method stub
Intent intent = new Intent();
intent.setType("video/*");
intent.setAction(Intent.ACTION_GET_CONTENT);
startActivityForResult(Intent.createChooser(intent, "Complete action using"), PICK_FROM_GALLERY);
}
});
}
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
if (resultCode != RESULT_OK) return;
if (requestCode == PICK_FROM_GALLERY) {
Uri mVideoURI = data.getData();
videoView.setVideoURI(mVideoURI);
videoView.start();
demoVideoFolder = mVideoURI.getPath();
demoVideoPath = demoVideoFolder;
savevideo(mVideoURI);
}
}
private Handler handler = new Handler() {
@Override
public void handleMessage(Message msg) {
Log.i(Prefs.TAG, "Handler got message");
if (progressBar != null) {
progressBar.dismiss();
// stopping the transcoding native
if (msg.what == STOP_TRANSCODING_MSG) {
Log.i(Prefs.TAG, "Got cancel message, calling fexit");
vk.fExit(getApplicationContext());
}
}
}
};
public void runTranscoding() {
progressBar = new ProgressDialog(MainActivity.this);
progressBar.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL);
progressBar.setTitle("FFmpeg4Android Direct JNI");
progressBar.setMessage("Press the cancel button to end the operation");
progressBar.setMax(100);
progressBar.setProgress(0);
progressBar.setCancelable(false);
progressBar.setButton(DialogInterface.BUTTON_NEGATIVE, "Cancel", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
handler.sendEmptyMessage(STOP_TRANSCODING_MSG);
}
});
progressBar.show();
new Thread() {
public void run() {
Log.d(Prefs.TAG,"Worker started");
try {
//sleep(5000);
runTranscodingUsingLoader();
handler.sendEmptyMessage(FINISHED_TRANSCODING_MSG);
} catch(Exception e) {
Log.e("threadmessage",e.getMessage());
}
}
}.start();
// Progress update thread
new Thread() {
ProgressCalculator pc = new ProgressCalculator(vkLogPath);
public void run() {
Log.d(Prefs.TAG,"Progress update started");
int progress = -1;
try {
while (true) {
sleep(300);
progress = pc.calcProgress();
if (progress != 0 && progress < 100) {
progressBar.setProgress(progress);
}
else if (progress == 100) {
Log.i(Prefs.TAG, "==== progress is 100, exiting Progress update thread");
pc.initCalcParamsForNextInter();
break;
}
}
} catch(Exception e) {
Log.e("threadmessage",e.getMessage());
}
}
}.start();
}
public void savevideo (Uri mVideoURI){
demoVideoFolder = mVideoURI.getPath();
demoVideoPath = demoVideoFolder;
Log.i(Prefs.TAG, getString(R.string.app_name) + " version: " + GeneralUtils.getVersionName(getApplicationContext()));
Button invoke = (Button) findViewById(R.id.button);
invoke.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
Log.i(Prefs.TAG, "run clicked.");
runTranscoding();
}
});
workFolder = getApplicationContext().getFilesDir() + "/";
Log.i(Prefs.TAG, "workFolder (license and logs location) path: " + workFolder);
vkLogPath = workFolder + "vk.log";
Log.i(Prefs.TAG, "vk log (native log) path: " + vkLogPath);
GeneralUtils.copyLicenseFromAssetsToSDIfNeeded(this, workFolder);
GeneralUtils.copyDemoVideoFromAssetsToSDIfNeeded(this, demoVideoFolder);
int rc = GeneralUtils.isLicenseValid(getApplicationContext(), workFolder);
Log.i(Prefs.TAG, "License check RC: " + rc);
}
}
ffmpeg command:
String[] complexCommand = {"ffmpeg","-y" ,"-i", "/sdcard/videokit/in.mp4","-strict","experimental",
"-vf", "movie=/sdcard/videokit/watermark.png [watermark];" +
" [in][watermark] overlay=main_w-overlay_w-10:10 [out]","-s",
"320x240","-r", "30", "-b", "15496k", "-vcodec", "mpeg4","-ab",
"48000", "-ac", "2", "-ar", "22050", "/sdcard/videokit/out1.mp4"};
This command is from a sample project. How do I pass the video path to this command? I do not know how to edit the command to support my requirement. Can someone guide me through this? Any help would be appreciated. Thank you.
-
Ffmpeg lose streams while using -map 0
27 March 2016, by Ngoral — I ran into a strange issue using ffmpeg on Ubuntu 14.04.
I run the command:
ffmpeg -i output2.avi -c:v h264 -minrate 2000k -maxrate 5000k -bufsize 2000k -profile:v high -level:v 4 -coder 1 -s 640x360 -bf 0 -pix_fmt yuv420p -r 25 -g 25 -c:a aac -ar 48k -b:a 321k -map 0 -y outpu.mp4
It produces the usual console output (already run with -loglevel verbose):
ffmpeg version N-79004-g2e6636a Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04)
configuration: --prefix=/home/ngoral/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/ngoral/ffmpeg_build/include --extra-ldflags=-L/home/ngoral/ffmpeg_build/lib --bindir=/home/ngoral/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-nonfree
libavutil 55. 19.100 / 55. 19.100
libavcodec 57. 28.101 / 57. 28.101
libavformat 57. 28.101 / 57. 28.101
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 39.102 / 6. 39.102
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
[avi @ 0x2707800] parser not found for codec dvvideo, packets or times may be invalid.
Last message repeated 1 times
Input #0, avi, from 'output2.avi':
Metadata:
encoder : Lavf57.28.101
Duration: 00:00:20.04, start: 0.000000, bitrate: 28911 kb/s
Stream #0:0: Video: dvvideo, 1 reference frame (dvsd / 0x64737664), yuv420p, 720x576 [SAR 16:15 DAR 4:3], 28684 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc
Stream #0:1: Audio: mp3 (U[0][0][0] / 0x0055), 48000 Hz, stereo, s16p, 192 kb/s
Stream #0:2: Audio: mp3 (U[0][0][0] / 0x0055), 48000 Hz, stereo, s16p, 64 kb/s
Stream #0:3: Audio: aac ([255][0][0][0] / 0x00FF), 48000 Hz, stereo, fltp, 117 kb/s
Matched encoder 'libx264' for codec 'h264'.
[graph 0 input from stream 0:0 @ 0x2784f60] w:720 h:576 pixfmt:yuv420p tb:1/25 fr:25/1 sar:16/15 sws_param:flags=2
[scaler for output stream 0:0 @ 0x2749d20] w:640 h:360 flags:'bicubic' interl:0
[scaler for output stream 0:0 @ 0x2749d20] w:720 h:576 fmt:yuv420p sar:16/15 -> w:640 h:360 fmt:yuv420p sar:3/4 flags:0x4
[graph 1 input from stream 0:1 @ 0x27a4fc0] tb:1/48000 samplefmt:s16p samplerate:48000 chlayout:0x3
[audio format for output stream 0:1 @ 0x27a5380] auto-inserting filter 'auto-inserted resampler 0' between the filter 'Parsed_anull_0' and the filter 'audio format for output stream 0:1'
[auto-inserted resampler 0 @ 0x27a7ae0] ch:2 chl:stereo fmt:s16p r:48000Hz -> ch:2 chl:stereo fmt:fltp r:48000Hz
[graph 2 input from stream 0:2 @ 0x27a6620] tb:1/48000 samplefmt:s16p samplerate:48000 chlayout:0x3
[audio format for output stream 0:2 @ 0x27a6440] auto-inserting filter 'auto-inserted resampler 0' between the filter 'Parsed_anull_0' and the filter 'audio format for output stream 0:2'
[auto-inserted resampler 0 @ 0x27b6be0] ch:2 chl:stereo fmt:s16p r:48000Hz -> ch:2 chl:stereo fmt:fltp r:48000Hz
[graph 3 input from stream 0:3 @ 0x27b6560] tb:1/48000 samplefmt:fltp samplerate:48000 chlayout:0x3
[libx264 @ 0x27889a0] using SAR=3/4
[libx264 @ 0x27889a0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 0x27889a0] profile High, level 4.0
[libx264 @ 0x27889a0] 264 - core 148 r2643 5c65704 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=2 keyint=25 keyint_min=2 scenecut=40 intra_refresh=0 rc_lookahead=25 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=5000 vbv_bufsize=2000 crf_max=0.0 nal_hrd=none filler=0 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'outpu.mp4':
Metadata:
encoder : Lavf57.28.101
Stream #0:0: Video: h264 (libx264), -1 reference frame ([33][0][0][0] / 0x0021), yuv420p, 640x360 [SAR 3:4 DAR 4:3], q=-1--1, max. 5000 kb/s, 25 fps, 12800 tbn, 25 tbc
Metadata:
encoder : Lavc57.28.101 libx264
Side data:
cpb: bitrate max/min/avg: 5000000/0/0 buffer size: 2000000 vbv_delay: -1
Stream #0:1: Audio: aac (LC) ([64][0][0][0] / 0x0040), 48000 Hz, stereo, fltp, 321 kb/s
Metadata:
encoder : Lavc57.28.101 aac
Stream #0:2: Audio: aac (LC) ([64][0][0][0] / 0x0040), 48000 Hz, stereo, fltp, 321 kb/s
Metadata:
encoder : Lavc57.28.101 aac
Stream #0:3: Audio: aac (LC) ([64][0][0][0] / 0x0040), 48000 Hz, stereo, fltp, 321 kb/s
Metadata:
encoder : Lavc57.28.101 aac
Stream mapping:
Stream #0:0 -> #0:0 (dvvideo (native) -> h264 (libx264))
Stream #0:1 -> #0:1 (mp3 (native) -> aac (native))
Stream #0:2 -> #0:2 (mp3 (native) -> aac (native))
Stream #0:3 -> #0:3 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
*** 3 dup!
No more output streams to write to, finishing.
frame= 501 fps= 29 q=-1.0 Lsize= 1792kB time=00:00:20.05 bitrate= 732.2kbits/s dup=3 drop=0 speed=1.14x
video:440kB audio:1331kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.155376%
Input file #0 (output2.avi):
Input stream #0:0 (video): 498 packets read (71712000 bytes); 498 frames decoded;
Input stream #0:1 (audio): 834 packets read (480384 bytes); 834 frames decoded (960768 samples);
Input stream #0:2 (audio): 835 packets read (160320 bytes); 835 frames decoded (961920 samples);
Input stream #0:3 (audio): 0 packets read (0 bytes); 0 frames decoded (0 samples);
Total: 2167 packets (72352704 bytes) demuxed
Output file #0 (outpu.mp4):
Output stream #0:0 (video): 501 frames encoded; 501 packets muxed (451055 bytes);
Output stream #0:1 (audio): 939 frames encoded (960768 samples); 940 packets muxed (724261 bytes);
Output stream #0:2 (audio): 940 frames encoded (961920 samples); 941 packets muxed (639072 bytes);
Output stream #0:3 (audio): 0 frames encoded (0 samples); 0 packets muxed (0 bytes);
Total: 2382 packets (1814388 bytes) muxed
[libx264 @ 0x27889a0] frame I:21 Avg QP:15.30 size: 8718
[libx264 @ 0x27889a0] frame P:480 Avg QP:24.52 size: 557
[libx264 @ 0x27889a0] mb I I16..4: 20.4% 55.5% 24.1%
[libx264 @ 0x27889a0] mb P I16..4: 0.0% 0.1% 0.0% P16..4: 7.6% 3.7% 1.7% 0.0% 0.0% skip:86.8%
[libx264 @ 0x27889a0] 8x8 transform intra:56.3% inter:50.3%
[libx264 @ 0x27889a0] coded y,uvDC,uvAC intra: 42.0% 39.5% 27.5% inter: 2.6% 1.4% 0.0%
[libx264 @ 0x27889a0] i16 v,h,dc,p: 36% 52% 3% 10%
[libx264 @ 0x27889a0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 56% 20% 14% 2% 1% 1% 2% 2% 2%
[libx264 @ 0x27889a0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 42% 26% 7% 4% 3% 4% 5% 5% 4%
[libx264 @ 0x27889a0] i8c dc,h,v,p: 66% 13% 17% 4%
[libx264 @ 0x27889a0] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x27889a0] ref P L0: 68.1% 11.5% 13.9% 6.5%
[libx264 @ 0x27889a0] kb/s:179.78
[aac @ 0x2747da0] Qavg: 62719.090
[aac @ 0x2748b20] Qavg: 64509.496
[aac @ 0x27498a0] Qavg: -nan
It seems like it outputs all 3 audio streams, but then I run
ffmpeg -loglevel verbose -i outpu.mp4
and get only 2 audio streams:
ffmpeg version N-79004-g2e6636a Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04)
configuration: --prefix=/home/ngoral/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/ngoral/ffmpeg_build/include --extra-ldflags=-L/home/ngoral/ffmpeg_build/lib --bindir=/home/ngoral/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-nonfree
libavutil 55. 19.100 / 55. 19.100
libavcodec 57. 28.101 / 57. 28.101
libavformat 57. 28.101 / 57. 28.101
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 39.102 / 6. 39.102
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'outpu.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.28.101
Duration: 00:00:20.06, start: 0.021333, bitrate: 731 kb/s
Stream #0:0(und): Video: h264 (High), 3 reference frames (avc1 / 0x31637661), yuv420p, 640x360 (640x368) [SAR 3:4 DAR 4:3], 180 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 289 kb/s (default)
Metadata:
handler_name : SoundHandler
Stream #0:2(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 254 kb/s
Metadata:
handler_name : SoundHandler
What’s wrong with it?
It works fine on my Windows machine and in an Ubuntu virtual machine, but when run on a real Ubuntu box it behaves like this. Do you have any ideas?
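One observation from the per-stream summary above: input stream #0:3 supplied 0 packets, so output stream #0:3 was written with no data, and an empty track may simply not survive in the resulting mp4. A way to check what actually ended up in the file is to list its audio streams with ffprobe; the sketch below only assembles and prints the command (it assumes ffprobe is on the PATH and outpu.mp4 is in the current directory when run for real).

```shell
# List index and codec of every audio stream ffprobe finds in the output.
# Assembled and echoed only; run the printed command where outpu.mp4 exists.
PROBE="ffprobe -v error -select_streams a -show_entries stream=index,codec_name -of csv=p=0 outpu.mp4"
echo "$PROBE"
```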
Thanks! -
ffmpeg - scaling and stacking 2 videos ?
25 May 2016, by Gambit2007 — I have 2 inputs and I want to scale, crop, and stack them on top of each other in a single pass. My command should look something like this:
ffmpeg -i input1 -i input2 -filter_complex crop=10000:5000:1000:0,scale=3840:1536 vstack output.mp4
I know I need to use filter chaining (?) but I tried to look it up online and couldn’t really get it to work.
So what would be the correct syntax to scale and crop both inputs and then stack them vertically on top of each other while using ’-filter_complex’ only once?
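For reference, the usual shape is a single -filter_complex graph with one chain per input, the chains separated by semicolons, and the two labelled results fed to vstack. The sketch below reuses the question's own crop/scale numbers purely as placeholders (vstack also requires both inputs to end up the same width, which the identical scale ensures); it only assembles and prints the command.

```shell
# One chain per input, then stack the labelled results vertically.
# Crop/scale values are the question's own and purely illustrative.
FILTER="[0:v]crop=10000:5000:1000:0,scale=3840:1536[a];[1:v]crop=10000:5000:1000:0,scale=3840:1536[b];[a][b]vstack=inputs=2[out]"
CMD="ffmpeg -i input1 -i input2 -filter_complex \"$FILTER\" -map \"[out]\" output.mp4"
echo "$CMD"
```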
Thanks!