Table of Contents
- 1 Obtaining the output when an AudioTrack is created
- 2 The lookup flow inside getOutputForAttr
- 2.1 Obtaining attr
- 2.2 attr -> strategy
- 2.3 strategy -> device
- 2.4 device -> output
- How output devices are defined
This article is a summary based on zhuyong006's article on how the audio output device is decided.
1 Obtaining the output when an AudioTrack is created
The AudioTrack call chain looks like this:
AudioTrack::AudioTrack
|- AudioTrack::set
|- AudioTrack::createTrack_l
|- AudioSystem::getOutputForAttr
Among these, AudioSystem::getOutputForAttr() obtains the output based on attr.
2 The lookup flow inside getOutputForAttr
attr -> strategy -> device -> output
2.1 Obtaining attr
Let's work backwards from where attr is used to see where it comes from, following the call flow above.
//AudioTrack.cpp
status_t AudioTrack::createTrack_l()
{
//attr comes from mAttributes
audio_attributes_t *attr = (mStreamType == AUDIO_STREAM_DEFAULT) ? &mAttributes : NULL;
status = AudioSystem::getOutputForAttr(attr, &output,
mSessionId, &streamType, mClientUid,
&config,
mFlags, &mRoutedDeviceId, &mPortId);
}
//AudioTrack.h
class AudioTrack : public AudioSystem::AudioDeviceCallback
{
audio_attributes_t mAttributes;
}
status_t AudioTrack::set(
audio_stream_type_t streamType,
uint32_t sampleRate,
audio_format_t format,
audio_channel_mask_t channelMask,
size_t frameCount,
audio_output_flags_t flags,
callback_t cbf,
void* user,
int32_t notificationFrames,
const sp<IMemory>& sharedBuffer,
bool threadCanCallJava,
audio_session_t sessionId,
transfer_type transferType,
const audio_offload_info_t *offloadInfo,
uid_t uid,
pid_t pid,
const audio_attributes_t* pAttributes, //here
bool doNotReconnect, //has a default value
float maxRequiredSpeed) //has a default value
{
if (pAttributes == NULL) {
// ...
} else {
// stream type shouldn't be looked at, this track has audio attributes
memcpy(&mAttributes, pAttributes, sizeof(audio_attributes_t));
//handle attr
mStreamType = AUDIO_STREAM_DEFAULT;
if ((mAttributes.flags & AUDIO_FLAG_HW_AV_SYNC) != 0) {
flags = (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_HW_AV_SYNC);
}
if ((mAttributes.flags & AUDIO_FLAG_LOW_LATENCY) != 0) {
flags = (audio_output_flags_t) (flags | AUDIO_OUTPUT_FLAG_FAST);
}
// check deep buffer after flags have been modified above
if (flags == AUDIO_OUTPUT_FLAG_NONE && (mAttributes.flags & AUDIO_FLAG_DEEP_BUFFER) != 0) {
flags = AUDIO_OUTPUT_FLAG_DEEP_BUFFER;
}
}
}
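The flag remapping at the end of set() can be condensed into a standalone sketch. The enum values below are illustrative stand-ins for the real constants in system/audio.h (AUDIO_FLAG_DEEP_BUFFER in particular is an assumption here), but the precedence logic mirrors the code above: HW_AV_SYNC and LOW_LATENCY are folded in unconditionally, while DEEP_BUFFER only applies when no other output flag was requested.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative bit values; the real constants live in system/audio.h.
enum audio_flags_mask_t : uint32_t {
    AUDIO_FLAG_NONE        = 0x0,
    AUDIO_FLAG_HW_AV_SYNC  = 0x10,
    AUDIO_FLAG_LOW_LATENCY = 0x100,
    AUDIO_FLAG_DEEP_BUFFER = 0x200,  // assumed value for this sketch
};
enum audio_output_flags_t : uint32_t {
    AUDIO_OUTPUT_FLAG_NONE        = 0x0,
    AUDIO_OUTPUT_FLAG_FAST        = 0x4,
    AUDIO_OUTPUT_FLAG_DEEP_BUFFER = 0x8,
    AUDIO_OUTPUT_FLAG_HW_AV_SYNC  = 0x40,
};

// Mirrors the remapping in AudioTrack::set(): attribute flags are folded
// into output flags before the output is requested.
audio_output_flags_t remapFlags(uint32_t attrFlags, audio_output_flags_t flags) {
    if (attrFlags & AUDIO_FLAG_HW_AV_SYNC)
        flags = (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_HW_AV_SYNC);
    if (attrFlags & AUDIO_FLAG_LOW_LATENCY)
        flags = (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_FAST);
    // deep buffer only wins when no other output flag was requested
    if (flags == AUDIO_OUTPUT_FLAG_NONE && (attrFlags & AUDIO_FLAG_DEEP_BUFFER))
        flags = AUDIO_OUTPUT_FLAG_DEEP_BUFFER;
    return flags;
}
```

Note that once LOW_LATENCY has set the FAST flag, flags is no longer NONE, so a DEEP_BUFFER attribute flag on the same track is ignored.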
AudioTrack::AudioTrack(
audio_stream_type_t streamType,
uint32_t sampleRate,
audio_format_t format,
audio_channel_mask_t channelMask,
const sp<IMemory>& sharedBuffer,
audio_output_flags_t flags,
callback_t cbf,
void* user,
int32_t notificationFrames,
audio_session_t sessionId,
transfer_type transferType,
const audio_offload_info_t *offloadInfo,
uid_t uid,
pid_t pid,
const audio_attributes_t* pAttributes, //here
bool doNotReconnect,
float maxRequiredSpeed)
: mStatus(NO_INIT),
mState(STATE_STOPPED),
mPreviousPriority(ANDROID_PRIORITY_NORMAL),
mPreviousSchedulingGroup(SP_DEFAULT),
mPausedPosition(0),
mSelectedDeviceId(AUDIO_PORT_HANDLE_NONE),
mPauseTimeRealUs(0),
mPortId(AUDIO_PORT_HANDLE_NONE),
mTrackOffloaded(false)
{
mStatus = set(streamType, sampleRate, format, channelMask,
0 /*frameCount*/, flags, cbf, user, notificationFrames,
sharedBuffer, false /*threadCanCallJava*/, sessionId, transferType, offloadInfo,
uid, pid, pAttributes, doNotReconnect, maxRequiredSpeed);
}
In createTrack_l, if mStreamType is the default value AUDIO_STREAM_DEFAULT, mAttributes is passed on as attr.
createTrack_l() takes no parameters, so mAttributes must be a member variable of AudioTrack; the header file confirms this. Let's keep moving up.
In set(), the incoming parameter pAttributes is copied into mAttributes as an audio_attributes_t. Next, the constructor.
The constructor simply receives it as a parameter, so attr is assigned when the AudioTrack is created, presumably passed down from the application layer. Let's take SoundPool as an example of where it originates.
//soundpool.cpp
void SoundChannel::play(const sp<Sample>& sample, int nextChannelID, float leftVolume,
float rightVolume, int priority, int loop, float rate)
{
newTrack = new AudioTrack(streamType, sampleRate, sample->format(),
channelMask, frameCount, AUDIO_OUTPUT_FLAG_FAST, callback, userData,
bufferFrames, AUDIO_SESSION_ALLOCATE, AudioTrack::TRANSFER_DEFAULT,
NULL /*offloadInfo*/, -1 /*uid*/, -1 /*pid*/, mSoundPool->attributes());
}
//soundpool.cpp
class SoundPool {
const audio_attributes_t* attributes() { return &mAttributes; }
audio_attributes_t mAttributes;
}
The last argument, mSoundPool->attributes(), is where attr comes from. (Note: the final two parameters can be omitted because they have default values.)
attributes() is a SoundPool member function that simply returns its member variable mAttributes.
Let's look at SoundPool's constructor.
//audio.h
typedef struct {
audio_content_type_t content_type;
audio_usage_t usage;
audio_source_t source;
audio_flags_mask_t flags;
char tags[AUDIO_ATTRIBUTES_TAGS_MAX_SIZE]; /* UTF8 */
} audio_attributes_t;
//SoundPool.cpp
SoundPool::SoundPool(int maxChannels, const audio_attributes_t* pAttributes)
{
memcpy(&mAttributes, pAttributes, sizeof(audio_attributes_t));
}
The SoundPool constructor copies the incoming pAttributes into its member variable mAttributes.
We can stop the trace here: SoundPool is created by the application, so it is confirmed that attr is passed down from the application layer.
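The whole chain can be condensed into a sketch: every layer copies the caller's struct by value, so the attributes the app fills in reach createTrack_l unchanged. The two structs below are simplified stand-ins for SoundPool and AudioTrack, and the struct itself is a reduced audio_attributes_t (the real one also carries source, flags, and a tags[] array):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Simplified stand-in for audio_attributes_t.
struct audio_attributes_t {
    uint32_t usage;        // e.g. AUDIO_USAGE_GAME
    uint32_t content_type; // e.g. AUDIO_CONTENT_TYPE_SONIFICATION
};

// Each layer copies the caller's struct into its own member, just as
// SoundPool's constructor and AudioTrack::set() do with memcpy.
struct SoundPoolSketch {
    audio_attributes_t mAttributes;
    explicit SoundPoolSketch(const audio_attributes_t* pAttributes) {
        memcpy(&mAttributes, pAttributes, sizeof(audio_attributes_t));
    }
    const audio_attributes_t* attributes() const { return &mAttributes; }
};

struct AudioTrackSketch {
    audio_attributes_t mAttributes;
    explicit AudioTrackSketch(const audio_attributes_t* pAttributes) {
        memcpy(&mAttributes, pAttributes, sizeof(audio_attributes_t));
    }
};
```

Because each hop is a plain memcpy, the values the app supplied are exactly what getOutputForAttr later sees.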
2.2 attr -> strategy
Call flow of createTrack_l:
AudioTrack::createTrack_l
|- AudioSystem::getOutputForAttr
|- AudioPolicyService::getOutputForAttr
|- AudioPolicyManager::getOutputForAttr
|- getStrategyForAttr
|- getDeviceForStrategy
|- AudioPolicyManager::getOutputForDevice
Next, let's start with getStrategyForAttr.
- getStrategyForAttr is called in getOutputForAttr:
//AudioPolicyManager.cpp
status_t AudioPolicyManager::getOutputForAttr(const audio_attributes_t *attr,
audio_io_handle_t *output,
audio_session_t session,
audio_stream_type_t *stream,
uid_t uid,
const audio_config_t *config,
audio_output_flags_t flags,
audio_port_handle_t *selectedDeviceId,
audio_port_handle_t *portId)
{
//get the strategy
routing_strategy strategy = (routing_strategy) getStrategyForAttr(&attributes);
//get the device type
audio_devices_t device = getDeviceForStrategy(strategy, false /*fromCache*/);
//get the output
*output = getOutputForDevice(device, session, *stream,
config->sample_rate, config->format, config->channel_mask,
flags, &config->offload_info);
}
//routingstrategy.h
enum routing_strategy {
STRATEGY_MEDIA,
STRATEGY_PHONE,
STRATEGY_SONIFICATION,
STRATEGY_SONIFICATION_RESPECTFUL,
STRATEGY_DTMF,
STRATEGY_ENFORCED_AUDIBLE,
STRATEGY_TRANSMITTED_THROUGH_SPEAKER,
STRATEGY_ACCESSIBILITY,
STRATEGY_REROUTING,
NUM_STRATEGIES
};
The header file shows that the routing strategy is an enum with the values above; the most common one is STRATEGY_MEDIA.
- Definition of getStrategyForAttr:
//AudioPolicyManager.cpp
uint32_t AudioPolicyManager::getStrategyForAttr(const audio_attributes_t *attr) {
// flags to strategy mapping
if ((attr->flags & AUDIO_FLAG_BEACON) == AUDIO_FLAG_BEACON) {
return (uint32_t) STRATEGY_TRANSMITTED_THROUGH_SPEAKER;
}
if ((attr->flags & AUDIO_FLAG_AUDIBILITY_ENFORCED) == AUDIO_FLAG_AUDIBILITY_ENFORCED) {
return (uint32_t) STRATEGY_ENFORCED_AUDIBLE;
}
// usage to strategy mapping
return static_cast<uint32_t>(mEngine->getStrategyForUsage(attr->usage));
}
I won't trace the full call chain here; the reference article covers it well.
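The priority order in getStrategyForAttr (special flags first, then a usage-based engine mapping) can be sketched with the engine call stubbed out. The flag values below are illustrative, and the engine is reduced to a constant STRATEGY_MEDIA:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative flag values; the real ones are in system/audio.h.
enum : uint32_t {
    AUDIO_FLAG_AUDIBILITY_ENFORCED = 0x1,
    AUDIO_FLAG_BEACON              = 0x8,
};
enum routing_strategy {
    STRATEGY_MEDIA,
    STRATEGY_ENFORCED_AUDIBLE,
    STRATEGY_TRANSMITTED_THROUGH_SPEAKER,
};

// Mirrors getStrategyForAttr(): flags take priority, and only when no
// special flag matches does the usage-based mapping run (stubbed here as
// a stand-in for mEngine->getStrategyForUsage(attr->usage)).
routing_strategy strategyForFlags(uint32_t flags) {
    if ((flags & AUDIO_FLAG_BEACON) == AUDIO_FLAG_BEACON)
        return STRATEGY_TRANSMITTED_THROUGH_SPEAKER;
    if ((flags & AUDIO_FLAG_AUDIBILITY_ENFORCED) == AUDIO_FLAG_AUDIBILITY_ENFORCED)
        return STRATEGY_ENFORCED_AUDIBLE;
    return STRATEGY_MEDIA; // usage-based fallback
}
```

So a track whose attributes set no special flags ends up routed purely by its usage, which for ordinary playback resolves to STRATEGY_MEDIA.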
2.3 strategy -> device
- The call:
//audio.h
typedef uint32_t audio_devices_t;
//AudioPolicyManager.cpp
status_t AudioPolicyManager::getOutputForAttr(const audio_attributes_t *attr,
audio_io_handle_t *output,
audio_session_t session,
audio_stream_type_t *stream,
uid_t uid,
const audio_config_t *config,
audio_output_flags_t flags,
audio_port_handle_t *selectedDeviceId,
audio_port_handle_t *portId)
{
audio_devices_t device = getDeviceForStrategy(strategy, false /*fromCache*/);
}
Note that audio_devices_t is just a uint32_t: the device obtained here is only a device type (a bitmask), not an actual device object.
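Since each device type occupies one bit of that uint32_t, a single audio_devices_t value can describe a set of devices, and membership is a bitwise test. A minimal sketch (the bit values here match my reading of system/audio.h, but treat them as illustrative):

```cpp
#include <cassert>
#include <cstdint>

typedef uint32_t audio_devices_t;

// Illustrative subset of output device-type bits.
const audio_devices_t AUDIO_DEVICE_OUT_EARPIECE      = 0x1;
const audio_devices_t AUDIO_DEVICE_OUT_SPEAKER       = 0x2;
const audio_devices_t AUDIO_DEVICE_OUT_WIRED_HEADSET = 0x4;

// One bit per type means a combination of devices fits in one value,
// and "does this output support that device?" is a single AND.
bool supportsDevice(audio_devices_t supported, audio_devices_t device) {
    return (supported & device) != 0;
}
```

This is why getOutputsForDevice later can match an output against a device with simple mask operations.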
- Definition of getDeviceForStrategy:
audio_devices_t AudioPolicyManager::getDeviceForStrategy(routing_strategy strategy,
bool fromCache)
{
// Routing
// see if we have an explicit route
// scan the whole RouteMap, for each entry, convert the stream type to a strategy
// (getStrategy(stream)).
// if the strategy from the stream type in the RouteMap is the same as the argument above,
// and activity count is non-zero and the device in the route descriptor is available
// then select this device.
for (size_t routeIndex = 0; routeIndex < mOutputRoutes.size(); routeIndex++) {
sp<SessionRoute> route = mOutputRoutes.valueAt(routeIndex);
routing_strategy routeStrategy = getStrategy(route->mStreamType);
if ((routeStrategy == strategy) && route->isActive() &&
(mAvailableOutputDevices.indexOf(route->mDeviceDescriptor) >= 0)) {
return route->mDeviceDescriptor->type();
}
}
if (fromCache) {
ALOGVV("getDeviceForStrategy() from cache strategy %d, device %x",
strategy, mDeviceForStrategy[strategy]);
return mDeviceForStrategy[strategy];
}
#if 0
return mEngine->getDeviceForStrategy(strategy);
#else
/* OEM 20171027 Jack W Lu, Add Audio Policy Setting {*/
audio_devices_t device2;
char project_id[PROPERTY_VALUE_MAX];
device2 = mEngine->getDeviceForStrategy(strategy);
// property_get("ro.product.model", project_id, "");
if(userManualAudioPolicy != 0)
{
ALOGE("enforce User Strategy : %d", userManualAudioPolicy);
enforceUserStrategy(device2, userManualAudioPolicy);
}
ALOGV("proj=%s, device2 = %d", project_id, device2);
return device2;
#endif
/* OEM 20171027 Jack W Lu, Add Audio Policy Setting }*/
}
//AudioPolicyManager.h
//the array is declared in the header
audio_devices_t mDeviceForStrategy[NUM_STRATEGIES];
//initializes the mDeviceForStrategy array
void AudioPolicyManager::updateDevicesAndOutputs()
{
for (int i = 0; i < NUM_STRATEGIES; i++) {
mDeviceForStrategy[i] = getDeviceForStrategy((routing_strategy)i, false /*fromCache*/);
}
mPreviousOutputs = mOutputs;
}
getDeviceForStrategy first scans mOutputRoutes for an explicit route matching the strategy; if fromCache is true it returns the cached entry in the mDeviceForStrategy array, and otherwise it asks the policy engine (mEngine->getDeviceForStrategy) for a fresh decision.
I haven't checked where updateDevicesAndOutputs, which fills mDeviceForStrategy, gets called; that deserves a closer look later.
Summary: all we need to know here is that a device type is obtained from the strategy.
See the reference article for the detailed call chain.
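The fromCache mechanism boils down to a precomputed table: updateDevicesAndOutputs() runs the full (expensive) evaluation once per strategy, and later lookups just read the table. A minimal sketch with the engine stubbed out to a constant device type:

```cpp
#include <cassert>
#include <cstdint>

enum { NUM_STRATEGIES = 9 };
typedef uint32_t audio_devices_t;

// Sketch of the fromCache pattern in getDeviceForStrategy().
struct PolicySketch {
    audio_devices_t mDeviceForStrategy[NUM_STRATEGIES];

    // stand-in for mEngine->getDeviceForStrategy(strategy)
    audio_devices_t engineDeviceFor(int /*strategy*/) const {
        return 0x2; // pretend the engine always picks the speaker
    }

    // mirrors updateDevicesAndOutputs(): precompute one device per strategy
    void updateDevicesAndOutputs() {
        for (int i = 0; i < NUM_STRATEGIES; i++)
            mDeviceForStrategy[i] = getDeviceForStrategy(i, false /*fromCache*/);
    }

    audio_devices_t getDeviceForStrategy(int strategy, bool fromCache) const {
        if (fromCache)
            return mDeviceForStrategy[strategy]; // cheap table lookup
        return engineDeviceFor(strategy);        // full evaluation
    }
};
```

The cache matters because getDeviceForStrategy is consulted on hot paths (volume changes, routing checks), where re-running the engine for every call would be wasteful.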
2.4 device -> output
This part can also be found in my article: Android获取音频输出getOutput和设置.
audio_io_handle_t AudioPolicyManager::getOutputForDevice(
audio_devices_t device,
audio_session_t session,
audio_stream_type_t stream,
uint32_t samplingRate,
audio_format_t format,
audio_channel_mask_t channelMask,
audio_output_flags_t flags,
const audio_offload_info_t *offloadInfo)
{
// ...
non_direct_output:
if ((flags & AUDIO_OUTPUT_FLAG_HW_AV_SYNC) != 0) {
return AUDIO_IO_HANDLE_NONE;
}
if (audio_is_linear_pcm(format)) {
SortedVector<audio_io_handle_t> outputs = getOutputsForDevice(device, mOutputs);
flags = (audio_output_flags_t)(flags & ~AUDIO_OUTPUT_FLAG_DIRECT);
output = selectOutput(outputs, flags, format);
}
return output;
}
There are two cases here:
- reopen an output (see my article above)
- pick a suitable output from the outputs opened in the AudioPolicyManager constructor
Selection method:
The incoming device maps to a list of outputs (the outputs that support this device type); an output is then chosen from that list according to the audio flags.
Summary: the output is obtained from the device type.
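The "choose by flags" idea can be sketched as picking the opened output whose flags overlap most with the requested ones. This is a simplification: the real selectOutput also weighs the primary output, format, and channel count, but the flag-matching core looks like this:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

typedef int audio_io_handle_t;
typedef uint32_t audio_output_flags_t;

struct OutputDesc {
    audio_io_handle_t handle;
    audio_output_flags_t flags; // flags this output was opened with
};

// Among the outputs that support the device, prefer the one whose opened
// flags share the most bits with the requested flags.
audio_io_handle_t selectOutputSketch(const std::vector<OutputDesc>& outputs,
                                     audio_output_flags_t flags) {
    audio_io_handle_t best = 0; // stand-in for AUDIO_IO_HANDLE_NONE
    int bestMatches = -1;
    for (const OutputDesc& out : outputs) {
        int matches = __builtin_popcount(out.flags & flags);
        if (matches > bestMatches) {
            bestMatches = matches;
            best = out.handle;
        }
    }
    return best;
}
```

So a track requesting AUDIO_OUTPUT_FLAG_FAST lands on the fast mixer output when one exists, while a flagless track falls back to whichever output comes first.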
How output devices are defined
The AudioPolicyManager constructor parses the audio policy config file, which describes the devices attached to each output; how a specific device is used from there will be looked at later.