Suppose that I have the following string:
http://www.domain.com/?s=some&two=20
How can I take off everything after the &, including the & itself?
>>str = "http://www.domain.com/?s=some&two=20"
>>str.split("&")
>>["http://www.domain.com/?s=some", "two=20"]
Well, to answer the immediate question:
>>> s = "http://www.domain.com/?s=some&two=20"
The rfind method returns the index of the right-most occurrence of the substring:
>>> s.rfind("&")
29
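One caveat: rfind does not raise an error when the substring is missing, it simply returns -1:
>>> "http://www.domain.com/".rfind("&")
-1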
You can take all characters up to a given index with the slicing operator:
>>> "foobar"[:4]
'foob'
Putting the two together:
>>> s[:s.rfind("&")]
'http://www.domain.com/?s=some'
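Be aware that rfind finds the last '&', so if the query string had several parameters, only the part after the final '&' would be stripped; to cut at the first '&', use find instead. A quick comparison with a made-up URL that has an extra parameter:
>>> u = "http://www.domain.com/?s=some&two=20&three=30"
>>> u[:u.rfind("&")]
'http://www.domain.com/?s=some&two=20'
>>> u[:u.find("&")]
'http://www.domain.com/?s=some'
Also note that if there is no '&' at all, the -1 returned by rfind would make the slice silently drop the last character, so check for that case if it can happen.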
If you are dealing with URLs in particular, you might want to use the built-in libraries that deal with URLs. If, for example, you wanted to remove the 'two' parameter from the above query string:
First, parse the URL as a whole:
>>> import urlparse, urllib
>>> parse_result = urlparse.urlsplit("http://www.domain.com/?s=some&two=20")
>>> parse_result
SplitResult(scheme='http', netloc='www.domain.com', path='/', query='s=some&two=20', fragment='')
Take out just the query string:
>>> query_s = parse_result.query
>>> query_s
's=some&two=20'
Turn it into a dict:
>>> query_d = urlparse.parse_qs(parse_result.query)
>>> query_d
{'s': ['some'], 'two': ['20']}
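The values come back as lists because a key may legally appear more than once in a query string, and parse_qs collects every occurrence. For example, with a made-up query string that repeats 's':
>>> repeated = urlparse.parse_qs("s=some&s=other&two=20")
>>> repeated['s']
['some', 'other']
>>> repeated['two']
['20']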
>>> query_d['s']
['some']
>>> query_d['two']
['20']
Remove the 'two' key from the dict:
>>> del query_d['two']
>>> query_d
{'s': ['some']}
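If you cannot be sure the key is present, dict.pop with a default avoids the KeyError that del raises for a missing key (here 'two' is already gone, so pop just returns the default and the interpreter prints nothing):
>>> query_d.pop('two', None)
>>> query_d
{'s': ['some']}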
Put it back into a query string:
>>> new_query_s = urllib.urlencode(query_d, True)
>>> new_query_s
's=some'
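The second argument (doseq) matters here: it tells urlencode to encode each element of the list values separately. Without it, the list itself is stringified and percent-encoded:
>>> urllib.urlencode(query_d)
's=%5B%27some%27%5D'
>>> urllib.urlencode(query_d, True)
's=some'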
And now stitch the URL back together:
>>> result = urlparse.urlunsplit((
...     parse_result.scheme, parse_result.netloc,
...     parse_result.path, new_query_s, parse_result.fragment))
>>> result
'http://www.domain.com/?s=some'
The benefit of this is that you have more control over the URL. For example, if you always wanted to remove the 'two' argument, even if it appeared earlier in the query string ("two=20&s=some"), this would still do the right thing. It might be overkill, depending on what you want to do.
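One aside: the snippets above use the Python 2 module names. Under Python 3 the same functions live in urllib.parse, so a rough sketch of the same round trip would look like this:
from urllib.parse import urlsplit, urlunsplit, parse_qs, urlencode

parts = urlsplit("http://www.domain.com/?s=some&two=20")
query_d = parse_qs(parts.query)             # {'s': ['some'], 'two': ['20']}
query_d.pop('two', None)                    # drop the unwanted parameter, if present
new_query = urlencode(query_d, doseq=True)  # 's=some'
result = urlunsplit((parts.scheme, parts.netloc,
                     parts.path, new_query, parts.fragment))
print(result)                               # http://www.domain.com/?s=some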
You need to split the string:
>>> s = 'http://www.domain.com/?s=some&two=20'
>>> s.split('&')
['http://www.domain.com/?s=some', 'two=20']
That returns a list, as you can see, so you can take the first element:
>>> s2 = s.split('&')[0]
>>> print s2
http://www.domain.com/?s=some
# split on '&' and keep only the part before it
string = 'http://www.domain.com/?s=some&two=20'
cut_string = string.split('&')
new_string = cut_string[0]
print(new_string)   # http://www.domain.com/?s=some
You can use find()
>>> s = 'http://www.domain.com/?s=some&two=20'
>>> s[:s.find('&')]
'http://www.domain.com/?s=some'
Of course, if there is a chance that the searched-for text will not be present, then you need to write slightly more lengthy code:
pos = s.find('&')
if pos != -1:
    s = s[:pos]
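Alternatively, split with a maxsplit of 1 needs no explicit check, because it simply returns the whole string when there is no '&' (a quick sketch):
>>> 'http://www.domain.com/?s=some&two=20'.split('&', 1)[0]
'http://www.domain.com/?s=some'
>>> 'http://www.domain.com/'.split('&', 1)[0]
'http://www.domain.com/'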
Whilst you can make some progress using code like this, more complex situations demand a true URL parser.
s[0:"s".index("&")]
what does this do:
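If the '&' might be missing, one way to keep this one-liner safe is to catch that exception (a minimal sketch):
try:
    s = s[:s.index("&")]
except ValueError:
    pass    # no '&' in the string, so leave it unchanged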