I want to make a video from a batch of photos. I have an ArrayList that contains all the photo paths. How can I create the video without copying all the photos into one directory? (Some of the photos are in DCIM, some in the Screenshots folder...)
For example: first photo: storage/emulated/0/Pictures/Screenshots/Screenshot2, second photo: storage/emulated/0/DCIM/camera/photo152, third photo: storage/emulated/0/WhatsApp/Media/WhatsApp Images/IMG-20160919-WA0038.jpg
Another question: which command should I run? (My output file should be MP4 at 20 fps.) (...)
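One way to do this without copying anything is ffmpeg's concat demuxer: you write the photo paths (wherever they live) into a plain-text list file and feed that to ffmpeg. A minimal sketch, using the hypothetical paths from the question (the leading `/` and the 0.05 s per-frame duration, i.e. 1/20 s for 20 fps, are assumptions):

```shell
# Turn a list of photo paths (like the ArrayList in the question) into a
# concat-demuxer list file; the photos are referenced in place, not copied.
printf "file '%s'\nduration 0.05\n" \
  "/storage/emulated/0/Pictures/Screenshots/Screenshot2" \
  "/storage/emulated/0/DCIM/camera/photo152" \
  "/storage/emulated/0/WhatsApp/Media/WhatsApp Images/IMG-20160919-WA0038.jpg" \
  > photos.txt

# duration 0.05 = 1/20 s per photo, i.e. 20 fps. Then encode with:
#   ffmpeg -f concat -safe 0 -i photos.txt -vf "fps=20,format=yuv420p" output.mp4
# (-safe 0 is required because the list uses absolute paths; the concat
# documentation also suggests repeating the last file entry once.)
cat photos.txt
```

On Android you would generate `photos.txt` from the ArrayList in Java and pass the same arguments to your ffmpeg binary or wrapper.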
I've got a video with a few specific characteristics, and I'd like to transform another video to have these same characteristics. I tried on my own, but failed each time... The goal is to replace one video with another in a player.
Here's the original video: $ file 01.avi 01.avi: RIFF (little-endian) data, AVI, 320 x 240, video: Motion JPEG, audio: (mono, 22050 Hz)
or with $ ffmpeg -i 01.avi Input #0, avi, from '01.avi': Duration: 00:00:00.09, start: 0.000000, bitrate: N/A Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj420p(pc, bt470bg/unknown/unknown), 320x240, 2477 kb/s, 21.68 fps, 21.68 tbr, 21.68 (...)
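To match those reported characteristics (MJPEG in AVI, 320x240, yuvj420p, ~21.68 fps, mono 22050 Hz audio), a sketch of the transcode might look like the following. The input name, the `-q:v` quality value, and the audio codec are assumptions (the probe output doesn't say which audio codec the original uses; uncompressed PCM is a safe AVI choice):

```shell
# Re-encode a replacement clip to mimic 01.avi's container and streams:
ffmpeg -i replacement.mp4 \
       -vf scale=320:240 -r 21.68 \
       -c:v mjpeg -q:v 3 -pix_fmt yuvj420p \
       -c:a pcm_s16le -ar 22050 -ac 1 \
       01_new.avi
```

The odd 21.68 fps is simply what ffprobe reported for the original; if the player tolerates it, rounding to a standard rate like 24 may be safer.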
When I try to start silent playback (without audio and video streams) of HLS (HTTP Live Streaming) by typing ffplay -an -vn -i "http://localhost:8080/video/find?startTime=1376716800000&endTime=1376717400000"
I get the following error message on my console: ffplay version N-67063-g282c935 Copyright (c) 2003-2014 the FFmpeg developers built on Oct 20 2014 22:10:09 with gcc 4.9.1 (GCC) configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray (...)
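Two things in that command are worth checking. First, the URL must be quoted, otherwise the shell interprets the `&` and truncates the query string. Second, `-an -vn` together disables both streams, leaving ffplay nothing to play at all, which by itself produces an error. A sketch, assuming you actually want video-only playback:

```shell
# Quote the URL (the & would otherwise background the command), and
# disable only the stream you don't want -- here audio:
ffplay -an -i "http://localhost:8080/video/find?startTime=1376716800000&endTime=1376717400000"
```

If the goal is to touch the stream without decoding or displaying anything, ffplay is the wrong tool; ffprobe against the same quoted URL would be the usual choice.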
I am new to the NDK, so I read a tutorial and successfully built the FFmpeg lib. Then I copied it into my jni folder, created Android.mk and Application.mk files, and executed the ndk-build command, so now I have libavcodec.so in my lib folder. (I didn't copy the FFmpeg header files into my jni folder... Is it necessary to add the header files, or should I add the complete FFmpeg lib into jni? Stack Overflow comments say that you just have to add the header files.)
I know that if I want to convert my camera video to a smaller size, I have to compress it using libavcodec.so, so I compiled it. But the important thing is how (...)
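The Stack Overflow comments are right: your JNI code needs the FFmpeg *headers* at compile time plus the prebuilt `.so` at link time, not the whole FFmpeg source tree. A minimal Android.mk sketch (module names and directory paths are assumptions; adjust to where your `.so` and headers actually live):

```makefile
LOCAL_PATH := $(call my-dir)

# Declare the libavcodec.so you already built as a prebuilt shared library.
include $(CLEAR_VARS)
LOCAL_MODULE    := avcodec-prebuilt
LOCAL_SRC_FILES := prebuilt/libavcodec.so
include $(PREBUILT_SHARED_LIBRARY)

# Your own JNI wrapper compiles against the FFmpeg headers and links
# against the prebuilt module above.
include $(CLEAR_VARS)
LOCAL_MODULE           := myjni
LOCAL_SRC_FILES        := myjni.c
LOCAL_C_INCLUDES       := $(LOCAL_PATH)/ffmpeg-headers
LOCAL_SHARED_LIBRARIES := avcodec-prebuilt
include $(BUILD_SHARED_LIBRARY)
```

So: copy the `include/libavcodec` (and friends) header directories into `jni/ffmpeg-headers`, point `LOCAL_C_INCLUDES` at them, and ndk-build takes care of the rest.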
I recently had a few problems with FFmpeg and compiling it to get a library. I managed to get through all of them; however, I recently found out I need to add a Speex decoder (and possibly encoder) to my project. I got the Speex sources, ran ./configure and make; make install (later, as I had problems, I also used Brew to download Speex). I added --enable-libspeex to my configure script, and every time I try to use it I get a "Speex not found using pkg-config" error.
I am sure that there are Speex files in the /usr/local/include and lib directories; I also added those two as CFLAGS and LDFLAGS, I tried (...)
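That error message is literal: FFmpeg's configure locates Speex through pkg-config, not through CFLAGS/LDFLAGS, so what matters is whether `speex.pc` is on pkg-config's search path. A diagnostic sketch (the `/usr/local/lib/pkgconfig` path is the usual default install location, but an assumption here):

```shell
# Make the speex.pc installed by `make install` visible to pkg-config:
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH

# Verify pkg-config can see it before re-running FFmpeg's configure:
pkg-config --exists speex && echo "speex found" || echo "speex NOT found"
pkg-config --cflags --libs speex   # prints the flags configure would use
```

If `speex.pc` was never installed (some build paths skip it), reinstalling Speex via Brew and exporting Brew's pkgconfig directory the same way is the quickest fix.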
I'm building a video editing app, and currently I am using FFmpeg for the encoding and decoding process (my app can easily do the tasks where a live preview is not required, like reversing a video, applying filters like B/W, picking specific frames from a video, and some audio tasks, etc.). The problem is that if the user wants to adjust the brightness or contrast of the video, they need a live preview, but FFmpeg doesn't support that: it requires encoding the video before the preview. So my question is: how can I achieve a live preview with FFmpeg?
Thanks in advance (sorry for grammar-related (...)
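One common answer is to separate preview from export: ffplay (which ships with FFmpeg) applies filters in real time while playing, so you can preview brightness/contrast via the `eq` filter and only run the expensive ffmpeg encode once the user commits. A sketch with assumed filenames and filter values:

```shell
# Live preview: ffplay applies the filter on the fly, no encode step:
ffplay -vf "eq=brightness=0.10:contrast=1.3" input.mp4

# Final export with the same values once the user is happy:
ffmpeg -i input.mp4 -vf "eq=brightness=0.10:contrast=1.3" -c:a copy output.mp4
```

Inside an Android app, the equivalent design is to render the preview yourself on the GPU (e.g. an OpenGL shader adjusting brightness/contrast over the decoded frames) and reserve FFmpeg for the final export.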
I am trying to stream RTSP between two consoles on my computer.
On console 1 I have: ffmpeg -rtbufsize 100M -re -f dshow -s 320x240 -i video="BisonCam, NB Pro" -r 10 -an -f rtsp -rtsp_transport tcp rtsp://127.0.0.1:8554/demo
On console 2 I have: ffplay -rtsp_flags listen -i rtsp://127.0.0.1:8554/demo
When I execute both commands, my webcam LED lights up, but then ffmpeg immediately crashes. Has anyone encountered the same thing? I could really use some help here.
This is my ffmpeg configuration on a Windows 10 machine: ffmpeg version N-81391-g2a3720b Copyright (c) 2000-2016 (...)
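A common cause of this exact symptom is start-up order: with `-rtsp_flags listen`, ffplay is the server, so it must already be listening before ffmpeg tries to connect; otherwise ffmpeg gets a refused TCP connection on port 8554 and exits right after opening the camera. A sketch of the working order (same two commands from the question):

```shell
# Console 2 -- start the listener FIRST, so there is something to connect to:
ffplay -rtsp_flags listen -i rtsp://127.0.0.1:8554/demo

# Console 1 -- only then start publishing:
ffmpeg -rtbufsize 100M -re -f dshow -s 320x240 -i video="BisonCam, NB Pro" \
       -r 10 -an -f rtsp -rtsp_transport tcp rtsp://127.0.0.1:8554/demo
```

If the order is already correct, the next thing to check is the actual error text ffmpeg prints before exiting, since "crash" here is usually an ordinary connection error rather than a real crash.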
I've got a problem with one MPEG-TS video. It was actually created by someone else, and I don't even know how it was created. The problem is that ffmpeg takes a very long time to decode all the frames from the MPEG-TS video. The command I used for this operation is...
ffmpeg -i shame-run.mov -r 24/1 test/output%d.jpg
Actually, my application is integrated with FFmpeg v2.1.1, and I had code for detecting black frames in an MPEG-TS video. Here, my code is not able to detect all black frames from FFmpeg for this MPEG-TS video. So I took a standalone FFmpeg of the same version as mentioned above (...)
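If the end goal is only black-frame detection, dumping every frame to JPEG is the slow path: the `blackdetect` filter can scan the stream and log black intervals without encoding or writing any images. A sketch with an assumed input name and thresholds:

```shell
# blackdetect logs black intervals to stderr; -f null - discards the
# decoded frames instead of re-encoding them, which is much faster:
ffmpeg -i input.ts -vf "blackdetect=d=0.05:pix_th=0.10" -an -f null - 2>&1 \
  | grep blackdetect
```

`d` is the minimum black duration in seconds and `pix_th` the per-pixel luma threshold; tune both to match what your in-app detector considers "black".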
I want to convert a video using FFmpeg and place a watermark with multiple texts on it. The combined command for placing text and a watermark is:
ffmpeg -i input_1.mp4 -i watermark_small.png -filter_complex "overlay=10:10; drawtext=enable='between(t,0,12)':fontfile=font.ttf:text='Some text' : fontcolor=black: fontsize=18: box=1: boxcolor=yellow@0.5:boxborderw=5: x=(w-text_w)/1.15:y=30, drawtext=enable='between(t,14,22)':fontfile=font.ttf:text='Next text' : fontcolor=black: fontsize=18: box=1: boxcolor=yellow@0.5:boxborderw=5: x=(w-text_w)/1.15:y=30" -codec:v libx264 -preset (...)
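The likely problem in that command is the semicolon after `overlay=10:10`: in `-filter_complex`, a semicolon separates *unconnected* filter chains, so the drawtext chain ends up with no input. Chaining everything with commas lets drawtext receive the overlay's output. A sketch of the corrected graph (the preset value and output filename are placeholders, since the original command is truncated):

```shell
ffmpeg -i input_1.mp4 -i watermark_small.png -filter_complex \
"[0:v][1:v]overlay=10:10,\
drawtext=enable='between(t,0,12)':fontfile=font.ttf:text='Some text':fontcolor=black:fontsize=18:box=1:boxcolor=yellow@0.5:boxborderw=5:x=(w-text_w)/1.15:y=30,\
drawtext=enable='between(t,14,22)':fontfile=font.ttf:text='Next text':fontcolor=black:fontsize=18:box=1:boxcolor=yellow@0.5:boxborderw=5:x=(w-text_w)/1.15:y=30" \
-codec:v libx264 -preset fast output.mp4
```

The explicit `[0:v][1:v]` labels make the overlay inputs unambiguous; each subsequent comma-separated drawtext then operates on the previous filter's output.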