
Other articles (103)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out. -
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users. -
Libraries and binaries specific to video and audio processing
31 January 2010
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries
- FFMpeg: the main encoder; it can transcode almost any type of video or audio file into formats readable on the Internet. See this tutorial for its installation.
- Oggz-tools: tools for inspecting ogg files.
- Mediainfo: retrieves information from most video and audio formats.
Optional, complementary binaries
- flvtool2: (...)
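For illustration only (hypothetical file names, not taken from the article), a minimal FFmpeg transcode of a source file into a web-readable format might look like:

ffmpeg -i source.avi -c:v libvpx -b:v 1M -c:a libvorbis output.webm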
On other sites (7464)
-
'Source code does not match byte code' Android Studio
26 August 2020, by Connoe
I'm developing an Android video editing app using FFmpeg libraries in Android Studio 4.0.1. When I try to debug, the debugger steps into the decompiler and flashes 'Source code does not match byte code' across multiple steps through the decompiled code. The debugger also seems to jump around the decompiled code semi-randomly; for example, here logSlowDispatch is false, but the debugger steps into the contents of the if statement anyway without checking and flashes 'Source code does not match byte code'. I've looked at a lot of posts about this problem and have tried many of the suggested solutions, from Invalidate Caches / Restart to a fresh install of Android Studio, to no avail. I've read that redundant or outdated Gradle dependencies might have something to do with this, but removing some of these dependencies hasn't helped:


apply plugin: 'com.android.application'

android {
    compileSdkVersion 29
    defaultConfig {
        applicationId "com.example.capstoneapplication"
        minSdkVersion 21
        targetSdkVersion 29
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
        ndkVersion "21.3.6528147"
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])

    implementation 'com.android.support.constraint:constraint-layout:2.0.0'

    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'com.android.support.test:runner:1.0.2'
    androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'

    implementation 'com.writingminds:FFmpegAndroid:0.3.2'

    implementation 'org.florescu.android.rangeseekbar:rangeseekbar-library:0.3.0'

    implementation 'com.intuit.sdp:sdp-android:1.0.6'
}



My code seems to run properly and executes the ffmpeg command on the desired video between these jumps to the decompiler, but the video does not save. Here is a snippet from the Java class that might be causing this:


@Override
public boolean onOptionsItemSelected(MenuItem menuItem) {
    if (menuItem.getItemId() == R.id.trim) {
        final AlertDialog.Builder alertDialog = new AlertDialog.Builder(com.example.capstoneapplication.VideoTrimmer.this);

        LinearLayout linLay = new LinearLayout(com.example.capstoneapplication.VideoTrimmer.this);
        linLay.setOrientation(LinearLayout.VERTICAL);
        LinearLayout.LayoutParams layPar = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.WRAP_CONTENT);
        layPar.setMargins(50, 0, 50, 100);
        final EditText input = new EditText(com.example.capstoneapplication.VideoTrimmer.this);
        input.setLayoutParams(layPar);
        input.setGravity(Gravity.TOP | Gravity.START);
        input.setInputType(InputType.TYPE_TEXT_FLAG_CAP_SENTENCES);
        linLay.addView(input, layPar);

        alertDialog.setMessage("Enter Video Name");
        alertDialog.setTitle("Change Video Name");
        alertDialog.setView(linLay);
        alertDialog.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
            @Override
            public void onClick(DialogInterface dialog, int which) {
                dialog.dismiss();
            }
        });
        alertDialog.setPositiveButton("Submit", new DialogInterface.OnClickListener() {
            @Override
            public void onClick(DialogInterface dialog, int which) {
                fileName = input.getText().toString();
                try {
                    snipVideo(videoDurBar.getAbsoluteMinValue().intValue(), videoDurBar.getSelectedMaxValue().intValue(), fileName);
                } catch (FFmpegNotSupportedException e) {
                    e.printStackTrace();
                }
            }
        });
        alertDialog.show();
    }
    return super.onOptionsItemSelected(menuItem);
}

private void snipVideo(int min, int max, String fileName) throws FFmpegNotSupportedException {

    File destFolder = new File("storage/emulated/0" + "/EditingApeSnippedVideos");
    if (!destFolder.exists()) {
        destFolder.mkdir();
    }
    String fileExtension = ".mp4";
    destination = new File(destFolder, fileName + fileExtension);
    inputVideoPath = getPathFromUri(getApplicationContext(), uri);

    command = new String[]{"-ss", "" + min / 1000, "-y", "-i", inputVideoPath, "-t", "" + (max - min) / 1000, "-vcodec", "mpeg4", "-b:v", "2097152", "-b:a", "48000", "-ac", "2", "-ar", "22050", destination.getAbsolutePath()};

    //testing command
    //command = new String[]{"-y", "-i", inputVideoPath, "-ss", "00:00:02", "-to", "00:00:03", "-c", "copy", destination.getAbsolutePath()};
    final FFmpeg ff = FFmpeg.getInstance(this);
    ff.loadBinary(new FFmpegLoadBinaryResponseHandler() {

        @Override
        public void onStart() {
            Log.i("VideoTrimmer", "onStart");
        }

        @Override
        public void onFinish() {
            Log.i("VideoTrimmer", "onFinish");
        }

        @Override
        public void onFailure() {
            Log.i("VideoTrimmer", "onFailure");
        }

        @Override
        public void onSuccess() {
            Log.i("VideoTrimmer", "Success");
            try {
                ff.execute(command, new ExecuteBinaryResponseHandler());
            } catch (FFmpegCommandAlreadyRunningException e) {
                Log.i("VideoTrimmer", "FFmpegAlreadyRunning Exception");
            }
        }
    });
}



Has anyone found a solution to this debugger issue?
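Not part of the original question, but as a hedged first check (assuming a standard Gradle wrapper setup), a full clean rebuild forces the debugger to map against freshly compiled bytecode before trying deeper fixes:

./gradlew clean assembleDebug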


-
First input link main timebase do not match the corresponding second input link xfade timebase [duplicate]
25 March 2021, by Captain_Zaraki
I am trying to concatenate two videos while adding a transition effect. It works fine on some videos but gives an error on others.


The command I am using is:


ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0][1]xfade=transition=smoothup:duration=1:offset=1,format=yuv420p" output.mp4 



The error I am getting is:


[swscaler @ 0x558596a78800] deprecated pixel format used, make sure you did set range correctly

[Parsed_xfade_0 @ 0x558596a39ac0] First input link main timebase (1/12800) do not match the corresponding second input link xfade timebase (1/24000)

[Parsed_xfade_0 @ 0x558596a39ac0] Failed to configure output pad on Parsed_xfade_0

Error reinitializing filters!

Failed to inject frame into filter network: Invalid argument

Error while processing the decoded data for stream #1:0
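The message indicates that the two inputs reach the xfade filter with different timebases (and usually different frame rates). A commonly used fix, sketched here with an assumed 30 fps target, is to normalize both video streams with the fps and settb filters before xfade:

ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v]fps=30,settb=AVTB[v0];[1:v]fps=30,settb=AVTB[v1];[v0][v1]xfade=transition=smoothup:duration=1:offset=1,format=yuv420p" output.mp4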



-
How to pipe multiple images, being created in parallel with an index, to ffmpeg so that it can match the speed of image creation?
23 September 2020, by vishwas.mittal
We have a system that spews out 4-channel png images frame-by-frame (we control the output format of these images as well, so we can use something else as long as it supports transparency). Right now, we wait for all the images and then encode them with ffmpeg into a webm video file with vp8 (the libvpx encoder). But we now want to pipe these images to FFmpeg so it encodes the WebM video simultaneously as the images are being produced, so that we don't wait for ffmpeg to encode all the images afterwards.

This is the current command, in Python syntax:


['/usr/bin/ffmpeg', '-hide_banner', '-y', '-loglevel', 'info', '-f', 'rawvideo', '-pix_fmt', 'bgra', '-s', '1573x900', '-framerate', '30', '-i', '-', '-i', 'audio.wav', '-c:v', 'libvpx', '-b:v', '0', '-crf', '30', '-tile-columns', '2', '-quality', 'good', '-speed', '4', '-threads', '16', '-auto-alt-ref', '0', '-g', '300000', '-map', '0:v:0', '-map', '1:a:0', '-shortest', 'video.webm']
# for ease of read:
# /usr/bin/ffmpeg -hide_banner -y -loglevel info -f rawvideo -pix_fmt bgra -s 1573x900 -framerate 30 -i - -i audio.wav -c:v libvpx -b:v 0 -crf 30 -tile-columns 2 -quality good -speed 4 -threads 16 -auto-alt-ref 0 -g 300000 -map 0:v:0 -map 1:a:0 -shortest video.webm

proc = subprocess.Popen(args, stdin=subprocess.PIPE)



Here is an example of passing an image to the FFmpeg process's stdin:


# wait for the next frame to get ready
for frame_path in frame_path_list:
    while not os.path.exists(frame_path):
        time.sleep(0.25)
    frame = cv2.imread(frame_path, cv2.IMREAD_UNCHANGED)

    # put the frame in stdin so that it gets encoded
    proc.stdin.write(frame.astype(np.uint8).tobytes())



The current speed of this process is 0.135x, which is a huge bottleneck for us. Earlier, when we were taking input as
-pattern_type glob -i images/*.png
we were getting around 1x-1.2x on a single core. So our conclusion is that we're being bottlenecked by stdin, and hence we are looking for ways to pass input through multiple sources or somehow help ffmpeg parallelize this effort. A few options we're thinking of:

- Somehow feed it to different pipes and make ffmpeg read from them.
- Append a new image to ffmpeg without re-encoding the whole video, but we didn't find a way to do this when giving input images directly.

But we haven't been able to get either of these working, and we're open to any other solutions as well. We'd really appreciate help on this. Thanks!
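One direction worth trying (a minimal, untested sketch, assuming the producer writes complete PNG files that can be streamed as-is) is to pipe the PNG bytes themselves through ffmpeg's image2pipe demuxer instead of decoding each frame with cv2 and sending raw bgra buffers; far fewer bytes then cross the pipe per frame:

import os
import subprocess
import time

# Hypothetical paths; in the real system these arrive incrementally.
frame_path_list = ["images/frame_%05d.png" % i for i in range(300)]

args = [
    "/usr/bin/ffmpeg", "-hide_banner", "-y", "-loglevel", "info",
    # image2pipe reads a stream of concatenated PNG files from stdin,
    # so no cv2 decode is needed and each frame stays PNG-compressed.
    "-f", "image2pipe", "-c:v", "png", "-framerate", "30", "-i", "-",
    "-i", "audio.wav",
    # yuva420p keeps the alpha channel when encoding with libvpx.
    "-c:v", "libvpx", "-pix_fmt", "yuva420p", "-b:v", "0", "-crf", "30",
    "-auto-alt-ref", "0",
    "-map", "0:v:0", "-map", "1:a:0", "-shortest", "video.webm",
]
proc = subprocess.Popen(args, stdin=subprocess.PIPE)

for frame_path in frame_path_list:
    # Wait until the producer has written this frame to disk.
    while not os.path.exists(frame_path):
        time.sleep(0.05)
    with open(frame_path, "rb") as f:
        # Forward the PNG bytes untouched; ffmpeg decodes them itself.
        proc.stdin.write(f.read())

proc.stdin.close()
proc.wait()

If the raw-bgra pipe has to stay, a second lever is moving the stdin writes into a dedicated thread fed by a queue, so waiting for frames on disk and writing to the pipe overlap instead of alternating.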