Back in September last year, I wrote "A step-by-step, illustrated guide to compiling the FFmpeg libraries with Android Studio and porting them to Android" (if you haven't read it, follow the link). Today we'll take the FFmpeg 3.1.3 build from that post and use it to decode a live stream on Android. Here's the agenda:
Environment
Java code
NDK code
Decode and run
Environment:
Mac OS X
Android Studio 2.2
android-ndk-r10e
FFmpeg 3.1.3
Setting up Android Studio with the NDK is straightforward, so I won't walk through it here.
Create a project
Write the Java code:
NativePlayer:
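The original post shows NativePlayer only as a screenshot. Here is a minimal sketch of what the class has to contain, inferred from the javah command below and the JNI function signature later in this post (the package, class, method, and library names all come from this post; the rest is an assumption):

package com.hejunlin.ffmpegdecoder;

import android.view.Surface;

public class NativePlayer {

    static {
        // Load the wrapper library built later in this post (libyuiopffmpeg.so)
        System.loadLibrary("yuiopffmpeg");
    }

    // Decodes the stream at `url` and renders its frames into `surface`;
    // implemented in yuiopffmpeg.c
    public static native int playVideo(String url, Surface surface);
}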
Generate the header file: javah -d jni com.hejunlin.ffmpegdecoder.NativePlayer
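For reference, the generated com_hejunlin_ffmpegdecoder_NativePlayer.h should look roughly like this (a sketch of typical javah output for the signature above, not copied from the original post):

/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class com_hejunlin_ffmpegdecoder_NativePlayer */

#ifndef _Included_com_hejunlin_ffmpegdecoder_NativePlayer
#define _Included_com_hejunlin_ffmpegdecoder_NativePlayer
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     com_hejunlin_ffmpegdecoder_NativePlayer
 * Method:    playVideo
 * Signature: (Ljava/lang/String;Landroid/view/Surface;)I
 */
JNIEXPORT jint JNICALL Java_com_hejunlin_ffmpegdecoder_NativePlayer_playVideo
  (JNIEnv *, jclass, jstring, jobject);

#ifdef __cplusplus
}
#endif
#endif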
Copy the include folder from the android output directory of the FFmpeg 3.1.3 build (it contains 5 subfolders) plus the corresponding 5 .so files into a prebuilt folder; you can create the include and prebuilt folders yourself. The resulting jni directory layout is sketched below, followed by the make file.
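Assuming log.h is a small local header defining the LOGD/LOGE macros used in the C code below (an assumption; it isn't shown in this post), the jni directory then looks roughly like this:

jni/
├── Android.mk
├── Application.mk
├── com_hejunlin_ffmpegdecoder_NativePlayer.h   (generated by javah)
├── log.h
├── yuiopffmpeg.c
├── include/     - the 5 FFmpeg header folders
└── prebuilt/    - the 5 FFmpeg .so files

Now the make file, Android.mk: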
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := avcodec
LOCAL_SRC_FILES := prebuilt/libavcodec-57.so
include $(PREBUILT_SHARED_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := avformat
LOCAL_SRC_FILES := prebuilt/libavformat-57.so
include $(PREBUILT_SHARED_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := avutil
LOCAL_SRC_FILES := prebuilt/libavutil-55.so
include $(PREBUILT_SHARED_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := swresample
LOCAL_SRC_FILES := prebuilt/libswresample-2.so
include $(PREBUILT_SHARED_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := swscale
LOCAL_SRC_FILES := prebuilt/libswscale-4.so
include $(PREBUILT_SHARED_LIBRARY)

include $(CLEAR_VARS)
LOCAL_SRC_FILES := yuiopffmpeg.c
LOCAL_LDLIBS += -llog -lz -landroid
LOCAL_MODULE := yuiopffmpeg
LOCAL_C_INCLUDES += $(LOCAL_PATH)/include
LOCAL_SHARED_LIBRARIES := avcodec avformat avutil swresample swscale
include $(BUILD_SHARED_LIBRARY)
Also write Application.mk, which tells the build which ABIs to target. Because the code references native_window.h, you also need the line APP_PLATFORM := android-10. The contents:
APP_ABI := armeabi armeabi-v7a x86
APP_PLATFORM := android-10
Next comes the main part: the JNI code in yuiopffmpeg.c. Copy the function declaration out of the header generated earlier and paste it in:
//
// Created by 何俊林 on 17/3/1.
//
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libavutil/imgutils.h" // for av_image_get_buffer_size / av_image_fill_arrays
#include <android/native_window.h>
#include <android/native_window_jni.h>
#include <string.h>             // for memcpy
#include "com_hejunlin_ffmpegdecoder_NativePlayer.h"
#include "log.h"                // local LOGD/LOGE macros

JNIEXPORT jint JNICALL Java_com_hejunlin_ffmpegdecoder_NativePlayer_playVideo
  (JNIEnv * env, jclass clazz, jstring url, jobject surface)
{
    LOGD("start playvideo... url");
    // The stream URL. Converting the incoming jstring to char * here gave me
    // trouble, and the jstring-to-char* recipes found online weren't reliable
    // either, so the URL is hard-coded for now; marking this to revisit.
    char * file_name = "央视源xxx"; // a CCTV stream source, elided

    av_register_all();

    AVFormatContext * pFormatCtx = avformat_alloc_context();

    // Open video file
    if (avformat_open_input(&pFormatCtx, file_name, NULL, NULL) != 0) {
        LOGE("Couldn't open file:%s\n", file_name);
        return -1; // Couldn't open file
    }

    // Retrieve stream information
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0) {
        LOGE("Couldn't find stream information.");
        return -1;
    }

    // Find the first video stream
    int videoStream = -1, i;
    for (i = 0; i < pFormatCtx->nb_streams; i++) {
        if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO
                && videoStream < 0) {
            videoStream = i;
        }
    }
    if (videoStream == -1) {
        LOGE("Didn't find a video stream.");
        return -1; // Didn't find a video stream
    }

    // Get a pointer to the codec context for the video stream
    AVCodecContext * pCodecCtx = pFormatCtx->streams[videoStream]->codec;

    // Find the decoder for the video stream
    AVCodec * pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    if (pCodec == NULL) {
        LOGE("Codec not found.");
        return -1; // Codec not found
    }
    if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {
        LOGE("Could not open codec.");
        return -1; // Could not open codec
    }

    // Get the native window from the Java surface
    ANativeWindow * nativeWindow = ANativeWindow_fromSurface(env, surface);

    // Get the video dimensions
    int videoWidth = pCodecCtx->width;
    int videoHeight = pCodecCtx->height;

    // Set the native window's buffer size; it will scale automatically
    ANativeWindow_setBuffersGeometry(nativeWindow, videoWidth, videoHeight,
                                     WINDOW_FORMAT_RGBA_8888);
    ANativeWindow_Buffer windowBuffer;

    // Allocate video frame
    AVFrame * pFrame = av_frame_alloc();
    // Frame used for rendering
    AVFrame * pFrameRGBA = av_frame_alloc();
    if (pFrameRGBA == NULL || pFrame == NULL) {
        LOGE("Could not allocate video frame.");
        return -1;
    }

    // Determine required buffer size and allocate buffer.
    // This buffer holds the data that actually gets rendered, in RGBA format.
    int numBytes = av_image_get_buffer_size(AV_PIX_FMT_RGBA, pCodecCtx->width,
                                            pCodecCtx->height, 1);
    uint8_t * buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));
    av_image_fill_arrays(pFrameRGBA->data, pFrameRGBA->linesize, buffer,
                         AV_PIX_FMT_RGBA, pCodecCtx->width, pCodecCtx->height, 1);

    // Decoded frames are not in RGBA, so convert them before rendering
    struct SwsContext * sws_ctx = sws_getContext(pCodecCtx->width,
                                                 pCodecCtx->height,
                                                 pCodecCtx->pix_fmt,
                                                 pCodecCtx->width,
                                                 pCodecCtx->height,
                                                 AV_PIX_FMT_RGBA,
                                                 SWS_BILINEAR,
                                                 NULL, NULL, NULL);

    int frameFinished;
    AVPacket packet;
    while (av_read_frame(pFormatCtx, &packet) >= 0) {
        // Is this a packet from the video stream?
        if (packet.stream_index == videoStream) {
            // Decode video frame
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
            // A single decode call does not necessarily yield a complete frame
            if (frameFinished) {
                // Lock the native window buffer
                ANativeWindow_lock(nativeWindow, &windowBuffer, 0);
                // Convert the pixel format
                sws_scale(sws_ctx, (uint8_t const * const *) pFrame->data,
                          pFrame->linesize, 0, pCodecCtx->height,
                          pFrameRGBA->data, pFrameRGBA->linesize);
                // Get the strides
                uint8_t * dst = windowBuffer.bits;
                int dstStride = windowBuffer.stride * 4;
                uint8_t * src = (uint8_t *) (pFrameRGBA->data[0]);
                int srcStride = pFrameRGBA->linesize[0];
                // The window stride and the frame stride differ,
                // so copy row by row
                int h;
                for (h = 0; h < videoHeight; h++) {
                    memcpy(dst + h * dstStride, src + h * srcStride, srcStride);
                }
                ANativeWindow_unlockAndPost(nativeWindow);
            }
        }
        av_packet_unref(&packet);
    }

    sws_freeContext(sws_ctx);
    ANativeWindow_release(nativeWindow);
    av_free(buffer);
    av_free(pFrameRGBA);
    // Free the YUV frame
    av_free(pFrame);
    // Close the codec
    avcodec_close(pCodecCtx);
    // Close the video file
    avformat_close_input(&pFormatCtx);
    return 0;
}
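On the jstring problem flagged in the comment above: the standard JNI route is GetStringUTFChars/ReleaseStringUTFChars. A minimal sketch of how the hard-coded URL could be replaced (untested against whatever issue the original post ran into):

    // At the top of the function, instead of the hard-coded file_name:
    const char * file_name = (*env)->GetStringUTFChars(env, url, NULL);
    if (file_name == NULL) {
        return -1; // OutOfMemoryError was already thrown by the VM
    }

    // ... open, decode, and render as above ...

    // After avformat_close_input(), release the string before returning:
    (*env)->ReleaseStringUTFChars(env, url, file_name);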
At this point you can build (e.g. by running ndk-build from the directory containing the jni folder). When it finishes, an extra .so appears: libyuiopffmpeg.so, the library we just compiled.
Now, under the main directory of the Android Studio project, create a jniLibs folder and copy that .so into it; this one .so is all you need. The per-ABI subfolder names (armeabi, armeabi-v7a, x86) are fixed and must not be renamed. Also remember to add the configuration sketched below to build.gradle.
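The original post showed the build.gradle change only as a screenshot. A minimal sketch of the usual configuration, assuming the .so files sit in src/main/jniLibs (adjust the path if yours differs):

android {
    // ...
    sourceSets {
        main {
            // Point Gradle at the folder holding the prebuilt .so files
            jniLibs.srcDirs = ['src/main/jniLibs']
        }
    }
}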
Next, run the project. It failed at runtime, and it turned out the network permission was missing. Add the following to AndroidManifest.xml:
<uses-permission android:name="android.permission.INTERNET" />
Finally, it runs, and the decoded live stream renders on screen.