Question
I am attempting to use the Google Places API in order to get the place name of a location I am in.
The returned data structure has the following shape:

descriptor1: 'street_number' | 'neighborhood' | 'postcode' | 'route' | 'locality' | 'postal_town' | 'administrative_area_level_2' | 'administrative_area_level_1' | 'country'

places: [
  {
    address_components: [{
      long_name: string,
      short_name: string,
      types: [descriptor1, descriptor2?]
    }],
    // other fields, not relevant here
  }
]
There is no guarantee how many address components any given place will have, or if any of them even have any. There is no guarantee what types will and will not be represented.
I would like to write code that returns the long_name of the first address_component whose types[0] (e.g. via R.view(R.lensPath(['types', 0]))) is 'neighborhood' if one exists; otherwise 'locality', then 'postal_town', then 'administrative_area_level_2', then 'administrative_area_level_1', and finally 'country'.
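In plain terms, the selection I am after could be sketched like this (a plain-JavaScript illustration of the requirement, not the Ramda solution I want; the helper name pickByPriority is made up):

```javascript
// Scan a fixed priority list and return the long_name of the first
// component whose types[0] matches, falling through to broader types.
const priorities = ['neighborhood', 'locality', 'postal_town',
  'administrative_area_level_2', 'administrative_area_level_1', 'country']

function pickByPriority(components) {
  for (const type of priorities) {
    const match = components.find(c => c.types[0] === type)
    if (match) return match.long_name
  }
  return undefined
}

console.log(pickByPriority([
  { long_name: 'a', types: ['country'] },
  { long_name: 'b', types: ['locality'] }
])) // → 'b'  (locality outranks country)
```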
So I start with R.pluck('address_components', places). Now I could reduce each list down to an object, inserting the first component of each of the types I am interested in, and then look up the best value. Something like:
const interestingTypes = ['neighborhood', 'locality', 'postal_town', 'administrative_area_level_2', 'administrative_area_level_1', 'country']
const res = R.mergeAll(R.pluck('address_components', places).map((addressComponentList) =>
  addressComponentList.reduce((memo, addressComponent) => {
    if (interestingTypes.indexOf(addressComponent.types[0]) !== -1) {
      if (!memo[addressComponent.types[0]]) {
        memo[addressComponent.types[0]] = addressComponent.long_name
      }
    }
    return memo
  }, {})))
res[R.find((type) => (Object.keys(res).indexOf(type) !== -1), interestingTypes)]
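For reference, the same approach runs as follows in plain JavaScript (Object.assign standing in for R.mergeAll, and a made-up places value); it does find the tightest bound, but only after scanning everything:

```javascript
const interestingTypes = ['neighborhood', 'locality', 'postal_town',
  'administrative_area_level_2', 'administrative_area_level_1', 'country']

// Hypothetical input data for illustration.
const places = [{
  address_components: [
    { long_name: 'a', types: ['country'] },
    { long_name: 'b', types: ['locality'] }
  ]
}]

// Object.assign over the spread array plays the role of R.mergeAll.
const res = Object.assign({}, ...places
  .map(p => p.address_components)
  .map(list => list.reduce((memo, c) => {
    if (interestingTypes.includes(c.types[0]) && !memo[c.types[0]]) {
      memo[c.types[0]] = c.long_name
    }
    return memo
  }, {})))

// Look up the first priority type that made it into the merged object.
const best = res[interestingTypes.find(t => t in res)]
console.log(best) // → 'b'
```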
While it is certainly true that this can be made marginally more idiomatic by replacing the native .reduce and .map with R.reduce and R.map, that does not really address the fundamental problems:
1) This will iterate through every single member of the list even after finding the result.
2) The resulting structure still needs to be iterated (with the find, for instance) to actually find the tightest bound.
What would a pure functional, preferably lazy implementation of this look like? What features of Ramda could come in handy? Could I use lenses for this in some way? Function composition? Something else?
And is it ok to mix and match native map/reduce with Ramda? Surely native calls are better than library invocations whenever possible?
Answer 1:
One approach would be to create a lazy version of R.reduceRight:
const lazyReduceR = R.curry((fn, acc, list) => {
  function _lazyReduceR(i) {
    return i === list.length
      ? acc
      : fn(list[i], () => _lazyReduceR(i + 1))
  }
  return _lazyReduceR(0)
})
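To see the laziness in action, here is the same function without Ramda's currying (a self-contained sketch with made-up data), used to short-circuit a search: the callback receives the rest of the fold as a thunk and can simply decline to call it.

```javascript
// lazyReduceR without R.curry: the accumulated "rest" is only computed
// on demand, so fn can stop the recursion by never calling lzRest.
const lazyReduceR = (fn, acc, list) => {
  function step(i) {
    return i === list.length
      ? acc
      : fn(list[i], () => step(i + 1))
  }
  return step(0)
}

// Return the first even number; later elements are never visited.
let visited = 0
const firstEven = lazyReduceR(
  (x, lzRest) => { visited += 1; return x % 2 === 0 ? x : lzRest() },
  undefined,
  [1, 3, 4, 5, 6]
)
console.log(firstEven, visited) // → 4 3  (5 and 6 were never touched)
```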
This could then be used to create a function that will find the minimum element of a (non-empty) list, with a known lower bound:
const boundMinBy = R.curry((byFn, lowerBound, list) =>
  lazyReduceR((x, lzMin) => {
    if (byFn(x) === lowerBound) {
      return x
    } else {
      const min = lzMin()
      return byFn(x) < byFn(min) ? x : min
    }
  }, list[0], R.tail(list)))
If the lower bound is ever encountered, the recursion stops and returns that result immediately.
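For instance (a plain-JS sketch with R.curry and R.tail replaced by plain equivalents, and invented data), hitting the lower bound returns immediately without comparing the remaining elements:

```javascript
const lazyReduceR = (fn, acc, list) => {
  function step(i) {
    return i === list.length ? acc : fn(list[i], () => step(i + 1))
  }
  return step(0)
}

// Minimum by byFn, short-circuiting as soon as lowerBound is seen.
const boundMinBy = (byFn, lowerBound, list) =>
  lazyReduceR((x, lzMin) => {
    if (byFn(x) === lowerBound) return x // hit the bound: stop here
    const min = lzMin()
    return byFn(x) < byFn(min) ? x : min
  }, list[0], list.slice(1))

console.log(boundMinBy(x => x.rank, 0, [
  { rank: 3, name: 'c' },
  { rank: 0, name: 'a' }, // rank 0 is the lower bound: search stops here
  { rank: 1, name: 'b' }
])) // → { rank: 0, name: 'a' }
```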
With boundMinBy available, we can create a lookup table of address types to sort order values:
const sortOrder = {
  neighborhood: 0,
  locality: 1,
  postal_town: 2,
  administrative_area_level_2: 3,
  administrative_area_level_1: 4,
  country: 5
}
Along with a function that will produce a sort order value for a given address component:
const sortOrderOfAddress = address => sortOrder[address.types[0]]
And then we can compose it all together with a pipeline such as:
const process = R.pipe(
  R.prop('places'),
  R.chain(R.pipe(
    R.prop('address_components'),
    R.unless(
      R.isEmpty,
      R.pipe(
        boundMinBy(sortOrderOfAddress, 0),
        R.prop('long_name'),
        R.of
      )
    )
  ))
)
R.chain is used above to concatenate the results for all places and to filter out any places where address_components is empty.
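On arrays, R.chain behaves like Array.prototype.flatMap, which is why a place with an empty address_components list simply disappears from the result (a plain-JS illustration with made-up data):

```javascript
// Mapping to [] drops an element from the flattened result; wrapping a
// single value in an array (what R.of does above) keeps exactly one.
const places = [
  { address_components: [{ long_name: 'b' }] },
  { address_components: [] },
  { address_components: [{ long_name: 'e' }] }
]

const result = places.flatMap(p =>
  p.address_components.length === 0
    ? []                                  // empty: contributes nothing
    : [p.address_components[0].long_name] // one result, like R.of
)
console.log(result) // → [ 'b', 'e' ]
```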
I have included an example in the snippet below if you want to test it out with some data.
const lazyReduceR = R.curry((fn, acc, list) => {
  function _lazyReduceR(i) {
    return i === list.length
      ? acc
      : fn(list[i], () => _lazyReduceR(i + 1))
  }
  return _lazyReduceR(0)
})

const boundMinBy = R.curry((byFn, lowerBound, list) =>
  lazyReduceR((x, lzMin) => {
    if (byFn(x) === lowerBound) {
      return x
    } else {
      const min = lzMin()
      return byFn(x) < byFn(min) ? x : min
    }
  }, list[0], R.tail(list)))

const sortOrder = {
  neighborhood: 0,
  locality: 1,
  postal_town: 2,
  administrative_area_level_2: 3,
  administrative_area_level_1: 4,
  country: 5
}

const sortOrderOfAddress = address => sortOrder[address.types[0]]

const process = R.pipe(
  R.prop('places'),
  R.chain(R.pipe(
    R.prop('address_components'),
    R.unless(
      R.isEmpty,
      R.pipe(
        boundMinBy(sortOrderOfAddress, 0),
        R.prop('long_name'),
        R.of
      )
    )
  ))
)

////
const data = {
  places: [{
    address_components: [{
      long_name: 'a',
      types: ['country']
    }, {
      long_name: 'b',
      types: ['neighborhood']
    }, {
      long_name: 'c',
      types: ['postal_town']
    }]
  }, {
    address_components: [{
      long_name: 'd',
      types: ['country']
    }, {
      long_name: 'e',
      types: ['locality']
    }, {
      long_name: 'f',
      types: ['administrative_area_level_2']
    }]
  }]
}

console.log(process(data))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.23.0/ramda.min.js"></script>
Answer 2:
And is it ok to mix and match native map/reduce with ramda?
Absolutely. But you have to also consider the cognitive costs involved in combining them.
Surely native calls are better than library invocations whenever possible?
Better how? Library functions are designed to avoid some unfortunate complexity in the specifications for the native functions.
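A classic example of that complexity (my illustration, not from the answer): native Array.prototype.map passes the element index as a second argument to the callback, which trips up functions like parseInt, whereas Ramda's R.map effectively calls its function with the value only.

```javascript
// Native map calls the callback as (value, index, array), so parseInt
// receives the index as its radix argument.
console.log(['10', '10', '10'].map(parseInt)) // → [ 10, NaN, 2 ]

// A unary wrapper (what a Ramda-style map gives you) avoids this.
const unary = fn => x => fn(x)
console.log(['10', '10', '10'].map(unary(parseInt))) // → [ 10, 10, 10 ]
```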
Moreover, when Ramda's core functions were written, they had significantly better performance than their native counterparts. That may have changed as the native engines have progressed and as Ramda's functions have taken on more complexity. But it's also quite possible that it hasn't.
Source: https://stackoverflow.com/questions/44131662/ramda-js-best-way-to-get-the-tightest-geographic-bound