
Media (2)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
-
Map of Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (48)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.
-
Permissions overridden by plugins
27 April 2010, by
MediaSPIP core: autoriser_auteur_modifier(), so that visitors are able to edit their information on the authors page.
-
Managing the farm
2 March 2010, by
The farm as a whole is managed by "super admins". Some settings can be adjusted to regulate the needs of the different channels. Initially it uses the "Gestion de mutualisation" plugin.
On other sites (8279)
-
FFmpeg avformat_write_header always returns < 0
7 May 2024, by gabe.roze
We asked a freelancer to build a video encoder with FFmpeg for iOS, but there is a bug and the freelancer is no longer available. I am very inexperienced with FFmpeg and video encoding, and I am trying to debug this error.



From what I understand, we are attempting to create an output file and write a header for it; however, avformat_write_header always returns a value less than zero. If I comment the call out, it still does not work.



- (BOOL)writeHeaderWithError:(NSError *__autoreleasing *)error {
    AVDictionary *options = NULL;

    // Write header for output file
    int writeHeaderValue = avformat_write_header(self.formatContext, &options);
    if (writeHeaderValue < 0) {
        if (error != NULL) {
            *error = [FFUtilities errorForAVError:writeHeaderValue];
        }
        av_dict_free(&options);
        return NO;
    }
    av_dict_free(&options);
    return YES;
}
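A first debugging step (my suggestion, not part of the original code) is to turn the negative return code into a readable message. A minimal sketch using libavutil's av_strerror(); the helper name and log format are my own:

#include <stdio.h>
#include <libavutil/error.h>

// Convert an FFmpeg error code (a negative int, such as the value returned
// by avformat_write_header) into a human-readable string and log it.
static void log_av_error(const char *operation, int errnum) {
    char message[AV_ERROR_MAX_STRING_SIZE];
    if (av_strerror(errnum, message, sizeof(message)) < 0) {
        snprintf(message, sizeof(message), "unrecognized error code %d", errnum);
    }
    fprintf(stderr, "%s failed: %s (%d)\n", operation, message, errnum);
}

// Example: log_av_error("avformat_write_header", writeHeaderValue);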




Below is the relevant code showing how we instantiate an FFOutputFile:



- (AVFormatContext *)formatContextForOutputPath:(NSString *)outputPath options:(NSDictionary *)options {
    AVFormatContext *outputFormatContext = NULL;
    NSString *outputFormatString = [options objectForKey:kFFmpegOutputFormatKey];

    int openOutputValue = avformat_alloc_output_context2(&outputFormatContext, NULL, [outputFormatString UTF8String], [outputPath UTF8String]);
    if (openOutputValue < 0) {
        avformat_free_context(outputFormatContext);
        return nil;
    }
    return outputFormatContext;
}

- (void)addOutputStream:(FFOutputStream *)outputStream {
    [self.streams addObject:outputStream];
}

- (id)initWithPath:(NSString *)path options:(NSDictionary *)options {
    if (self = [super initWithPath:path options:options]) {
        self.formatContext = [self formatContextForOutputPath:path options:options];
        self.streams = [NSMutableArray array];
        self.bitstreamFilters = [NSMutableSet set];
    }
    return self;
}
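One thing worth noting (an observation, not a confirmed diagnosis): nothing in the code shown opens the output file itself. avformat_alloc_output_context2() only allocates the muxer context; for file-based formats, formatContext->pb must also be opened with avio_open() before avformat_write_header() is called, otherwise the header write fails. A minimal sketch of the usual sequence from FFmpeg's standard muxing flow, with a hypothetical path and helper name:

#include <libavformat/avformat.h>

// Usual muxer setup: allocate the context, create streams, open the output
// file, then write the header. Skipping avio_open() is a common reason for
// avformat_write_header() to return a negative error code.
static int open_and_write_header(AVFormatContext **ctx, const char *path) {
    int ret = avformat_alloc_output_context2(ctx, NULL, NULL, path);
    if (ret < 0)
        return ret;

    // ... add streams with avformat_new_stream() and fill in codec parameters ...

    if (!((*ctx)->oformat->flags & AVFMT_NOFILE)) {
        ret = avio_open(&(*ctx)->pb, path, AVIO_FLAG_WRITE);
        if (ret < 0)
            return ret; // could not open the output file
    }
    return avformat_write_header(*ctx, NULL); // negative on failure
}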



-
Squeeze image while capturing video with FFmpegFrameRecorder
12 April 2016, by Saty
I am trying to stream video with FFmpegFrameRecorder using javacv. Everything works well, except that the resulting video frames come out a bit squeezed vertically.
I am using the code below, which half of the internet community seems to use for live streaming:
public class MainActivity extends Activity implements OnClickListener {

    private final static String LOG_TAG = "MainActivity";

    private PowerManager.WakeLock mWakeLock;

    private String ffmpeg_link = "rtmp://username:password@xxx.xxx.xxx.xxx:1935/live/test.flv";
    //private String ffmpeg_link = "/mnt/sdcard/new_stream.flv";

    private volatile FFmpegFrameRecorder recorder;
    boolean recording = false;
    long startTime = 0;

    private int sampleAudioRateInHz = 44100;
    private int imageWidth = 320;
    private int imageHeight = 240;
    private int frameRate = 30;

    private Thread audioThread;
    volatile boolean runAudioThread = true;
    private AudioRecord audioRecord;
    private AudioRecordRunnable audioRecordRunnable;

    private CameraView cameraView;
    private IplImage yuvIplimage = null;

    private Button recordButton;
    private LinearLayout mainLayout;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
        setContentView(R.layout.activity_main);
        initLayout();
        initRecorder();
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (mWakeLock == null) {
            PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
            mWakeLock = pm.newWakeLock(PowerManager.SCREEN_BRIGHT_WAKE_LOCK, LOG_TAG);
            mWakeLock.acquire();
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        if (mWakeLock != null) {
            mWakeLock.release();
            mWakeLock = null;
        }
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        recording = false;
    }

    private void initLayout() {
        mainLayout = (LinearLayout) this.findViewById(R.id.record_layout);
        recordButton = (Button) findViewById(R.id.recorder_control);
        recordButton.setText("Start");
        recordButton.setOnClickListener(this);
        cameraView = new CameraView(this);
        LinearLayout.LayoutParams layoutParam = new LinearLayout.LayoutParams(imageWidth, imageHeight);
        mainLayout.addView(cameraView, layoutParam);
        Log.v(LOG_TAG, "added cameraView to mainLayout");
    }

    private void initRecorder() {
        Log.w(LOG_TAG, "initRecorder");
        if (yuvIplimage == null) {
            // Recreated after frame size is set in surface change method
            yuvIplimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 2);
            //yuvIplimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_32S, 2);
            Log.v(LOG_TAG, "IplImage.create");
        }
        recorder = new FFmpegFrameRecorder(ffmpeg_link, imageWidth, imageHeight, 1);
        Log.v(LOG_TAG, "FFmpegFrameRecorder: " + ffmpeg_link + " imageWidth: " + imageWidth + " imageHeight " + imageHeight);
        recorder.setFormat("flv");
        Log.v(LOG_TAG, "recorder.setFormat(\"flv\")");
        recorder.setSampleRate(sampleAudioRateInHz);
        Log.v(LOG_TAG, "recorder.setSampleRate(sampleAudioRateInHz)");
        // re-set in the surface changed method as well
        recorder.setFrameRate(frameRate);
        Log.v(LOG_TAG, "recorder.setFrameRate(frameRate)");
        // Create audio recording thread
        audioRecordRunnable = new AudioRecordRunnable();
        audioThread = new Thread(audioRecordRunnable);
    }

    // Start the capture
    public void startRecording() {
        try {
            recorder.start();
            startTime = System.currentTimeMillis();
            recording = true;
            audioThread.start();
        } catch (FFmpegFrameRecorder.Exception e) {
            e.printStackTrace();
        }
    }

    public void stopRecording() {
        // This should stop the audio thread from running
        runAudioThread = false;
        if (recorder != null && recording) {
            recording = false;
            Log.v(LOG_TAG, "Finishing recording, calling stop and release on recorder");
            try {
                recorder.stop();
                recorder.release();
            } catch (FFmpegFrameRecorder.Exception e) {
                e.printStackTrace();
            }
            recorder = null;
        }
    }

    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) {
        // Quit when back button is pushed
        if (keyCode == KeyEvent.KEYCODE_BACK) {
            if (recording) {
                stopRecording();
            }
            finish();
            return true;
        }
        return super.onKeyDown(keyCode, event);
    }

    @Override
    public void onClick(View v) {
        if (!recording) {
            startRecording();
            Log.w(LOG_TAG, "Start Button Pushed");
            recordButton.setText("Stop");
        } else {
            stopRecording();
            Log.w(LOG_TAG, "Stop Button Pushed");
            recordButton.setText("Start");
        }
    }

    //---------------------------------------------
    // audio thread, gets and encodes audio data
    //---------------------------------------------
    class AudioRecordRunnable implements Runnable {

        @Override
        public void run() {
            // Set the thread priority
            android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);

            // Audio
            int bufferSize;
            short[] audioData;
            int bufferReadResult;

            bufferSize = AudioRecord.getMinBufferSize(sampleAudioRateInHz,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
            audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleAudioRateInHz,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
            audioData = new short[bufferSize];

            Log.d(LOG_TAG, "audioRecord.startRecording()");
            audioRecord.startRecording();

            // Audio Capture/Encoding Loop
            while (runAudioThread) {
                // Read from audioRecord
                bufferReadResult = audioRecord.read(audioData, 0, audioData.length);
                if (bufferReadResult > 0) {
                    //Log.v(LOG_TAG, "audioRecord bufferReadResult: " + bufferReadResult);
                    // Changes in this variable may not be picked up despite it being "volatile"
                    if (recording) {
                        try {
                            // Write to FFmpegFrameRecorder
                            Buffer[] buffer = {ShortBuffer.wrap(audioData, 0, bufferReadResult)};
                            recorder.record(buffer);
                        } catch (FFmpegFrameRecorder.Exception e) {
                            Log.v(LOG_TAG, e.getMessage());
                            e.printStackTrace();
                        }
                    }
                }
            }
            Log.v(LOG_TAG, "AudioThread Finished");

            /* Capture/Encoding finished, release recorder */
            if (audioRecord != null) {
                audioRecord.stop();
                audioRecord.release();
                audioRecord = null;
                Log.v(LOG_TAG, "audioRecord released");
            }
        }
    }

    class CameraView extends SurfaceView implements SurfaceHolder.Callback, PreviewCallback {

        private boolean previewRunning = false;
        private SurfaceHolder holder;
        private Camera camera;
        private byte[] previewBuffer;
        long videoTimestamp = 0;
        Bitmap bitmap;
        Canvas canvas;

        public CameraView(Context _context) {
            super(_context);
            holder = this.getHolder();
            holder.addCallback(this);
            holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
        }

        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            camera = Camera.open();
            try {
                camera.setPreviewDisplay(holder);
                camera.setPreviewCallback(this);

                Camera.Parameters currentParams = camera.getParameters();
                Log.v(LOG_TAG, "Preview Framerate: " + currentParams.getPreviewFrameRate());
                Log.v(LOG_TAG, "Preview imageWidth: " + currentParams.getPreviewSize().width + " imageHeight: " + currentParams.getPreviewSize().height);

                // Use these values
                imageWidth = currentParams.getPreviewSize().width;
                imageHeight = currentParams.getPreviewSize().height;
                frameRate = currentParams.getPreviewFrameRate();

                bitmap = Bitmap.createBitmap(imageWidth, imageHeight, Bitmap.Config.ALPHA_8);

                /*
                Log.v(LOG_TAG, "Creating previewBuffer size: " + imageWidth * imageHeight * ImageFormat.getBitsPerPixel(currentParams.getPreviewFormat()) / 8);
                previewBuffer = new byte[imageWidth * imageHeight * ImageFormat.getBitsPerPixel(currentParams.getPreviewFormat()) / 8];
                camera.addCallbackBuffer(previewBuffer);
                camera.setPreviewCallbackWithBuffer(this);
                */

                camera.startPreview();
                previewRunning = true;
            } catch (IOException e) {
                Log.v(LOG_TAG, e.getMessage());
                e.printStackTrace();
            }
        }

        public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
            Log.v(LOG_TAG, "Surface Changed: width " + width + " height: " + height);

            // We would do this if we want to reset the camera parameters
            /*
            if (!recording) {
                if (previewRunning) {
                    camera.stopPreview();
                }
                try {
                    //Camera.Parameters cameraParameters = camera.getParameters();
                    //p.setPreviewSize(imageWidth, imageHeight);
                    //p.setPreviewFrameRate(frameRate);
                    //camera.setParameters(cameraParameters);
                    camera.setPreviewDisplay(holder);
                    camera.startPreview();
                    previewRunning = true;
                } catch (IOException e) {
                    Log.e(LOG_TAG, e.getMessage());
                    e.printStackTrace();
                }
            }
            */

            // Get the current parameters
            Camera.Parameters currentParams = camera.getParameters();
            Log.v(LOG_TAG, "Preview Framerate: " + currentParams.getPreviewFrameRate());
            Log.v(LOG_TAG, "Preview imageWidth: " + currentParams.getPreviewSize().width + " imageHeight: " + currentParams.getPreviewSize().height);

            // Use these values
            imageWidth = currentParams.getPreviewSize().width;
            imageHeight = currentParams.getPreviewSize().height;
            frameRate = currentParams.getPreviewFrameRate();

            // Create the yuvIplimage if needed
            yuvIplimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 2);
            //yuvIplimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_32S, 2);
        }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
            try {
                camera.setPreviewCallback(null);
                previewRunning = false;
                camera.release();
            } catch (RuntimeException e) {
                Log.v(LOG_TAG, e.getMessage());
                e.printStackTrace();
            }
        }

        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            if (yuvIplimage != null && recording) {
                videoTimestamp = 1000 * (System.currentTimeMillis() - startTime);

                // Put the camera preview frame right into the yuvIplimage object
                yuvIplimage.getByteBuffer().put(data);

                // FAQ about IplImage:
                // - For custom raw processing of data, getByteBuffer() returns an NIO direct
                //   buffer wrapped around the memory pointed by imageData, and under Android we can
                //   also use that Buffer with Bitmap.copyPixelsFromBuffer() and copyPixelsToBuffer().
                // - To get a BufferedImage from an IplImage, we may call getBufferedImage().
                // - The createFrom() factory method can construct an IplImage from a BufferedImage.
                // - There are also a few copy*() methods for BufferedImage<->IplImage data transfers.

                // Let's try it..
                // This works but only on transparency
                // Need to find the right Bitmap and IplImage matching types
                /*
                bitmap.copyPixelsFromBuffer(yuvIplimage.getByteBuffer());
                //bitmap.setPixel(10, 10, Color.MAGENTA);
                canvas = new Canvas(bitmap);
                Paint paint = new Paint();
                paint.setColor(Color.GREEN);
                float leftx = 20;
                float topy = 20;
                float rightx = 50;
                float bottomy = 100;
                RectF rectangle = new RectF(leftx, topy, rightx, bottomy);
                canvas.drawRect(rectangle, paint);
                bitmap.copyPixelsToBuffer(yuvIplimage.getByteBuffer());
                */

                //Log.v(LOG_TAG, "Writing Frame");
                try {
                    // Get the correct time
                    recorder.setTimestamp(videoTimestamp);
                    // Record the image into FFmpegFrameRecorder
                    recorder.record(yuvIplimage);
                } catch (FFmpegFrameRecorder.Exception e) {
                    Log.v(LOG_TAG, e.getMessage());
                    e.printStackTrace();
                }
            }
        }
    }
}
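One observation from the code above (a guess, not a confirmed fix): the recorder is created in initRecorder() with the hard-coded 320x240 before the camera reports its real preview size, and surfaceCreated()/surfaceChanged() later overwrite imageWidth and imageHeight without recreating the recorder. If the camera's actual preview aspect ratio differs from 4:3, the encoder scales frames into the 320x240 target and the output looks vertically squeezed. A minimal sketch of recreating the recorder once the real preview size is known, using only the fields and APIs already present in the code (the helper name is hypothetical):

    // Hypothetical helper: call this from surfaceChanged() instead of only
    // recreating yuvIplimage, so the encoder's frame size always matches the
    // camera's actual preview size and no vertical squeezing occurs.
    private void resetRecorder(int previewWidth, int previewHeight) {
        imageWidth = previewWidth;
        imageHeight = previewHeight;
        yuvIplimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 2);
        recorder = new FFmpegFrameRecorder(ffmpeg_link, imageWidth, imageHeight, 1);
        recorder.setFormat("flv");
        recorder.setSampleRate(sampleAudioRateInHz);
        recorder.setFrameRate(frameRate);
    }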
-
x264 library speed - AltiVec vs SSE2
25 February 2013, by Omer Merdan
I have a simple, cheap dual-core 3 GHz Intel Debian box and access to a super-expensive POWER7 AIX machine.
After a few days of struggle, I compiled libx264 and tested it on both computers:
- GCC: the x264 library on Intel (with SSE2 capabilities), and
- GCC on a 16-core PowerPC (with AltiVec).
... and the result is that the cheap Intel box is 2x faster! (With AltiVec disabled, the Intel is 10x faster.)
My question: is this normal?
Do other PowerPC users get the same results? Can the PowerPC/AltiVec optimisation of the x264 library run at the same speed as the Intel one, or is the MMX/SSE optimisation simply at least 2x faster for this library? I am not interested in multi-threading options; the number of cores and threads is irrelevant. This is just simple single-threaded x264 encoding with the default "medium" preset, using raw video as the source: SSE vs AltiVec.
Maybe the native AIX XL C compiler would give better results? (I only managed to get GCC working.)
... Mac PowerPC users may know something about this.
powrPc7-Aix:$ time (cat raw10sec.y4m |x264 --input-res 720x576 --fps 50 -o /dev/null -)
x264: 64-bit XCOFF
x264 [info]: using cpu capabilities: Altivec
time: real 0m33.559s
---
intelDebian:$ time (cat raw10sec.y4m |x264 --input-res 720x576 --fps 50 -o /dev/null -)
x264: ELF 32-bit LSB executable
x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.1 Cache64
time: real 0m16.503s
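For anyone reproducing this comparison, one way to isolate the SIMD contribution on each machine is x264's --no-asm switch, which disables all CPU-optimized code paths. A sketch of the comparison, assuming the build includes that switch (raw10sec.y4m is the poster's own test clip):

# Baseline without SIMD, then with auto-detected SIMD, on the same machine:
time (cat raw10sec.y4m | x264 --no-asm --input-res 720x576 --fps 50 -o /dev/null -)
time (cat raw10sec.y4m | x264 --input-res 720x576 --fps 50 -o /dev/null -)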